Technology
A world without technology is impossible to imagine. In today’s increasingly digital world, technology has become a fundamental part of almost every aspect of people’s daily lives. Managing finances, hospital visits, video conferences, wandering through increasingly smarter cities and homes, and even carrying a phone in a pocket instead of leaving it tethered to the living room wall: everything has been digitized to some degree. Looking to the future, technology will expand its reach and influence how people interact with one another, conduct academic research, and even monitor their health. Nevertheless, advancements in technology have also raised many questions regarding its potentially unethical use. Furthermore, the transition to a more technological society brings an increased desire for privacy and transparency about how information about people’s lives is monitored, collected, and used by companies, organizations, and other people. This chapter aims to highlight the future of technological advancement by considering its benefits as well as the potential concerns it raises.
5.1 Digitization - Technology in Every Aspect of Our Lives
As a result of technological advancements, the future of our world will be highly influenced by digital technologies and mediums of communication. A key lesson we have learned from the Covid-19 pandemic is that the way we shop, connect, and conduct our businesses can grow within the digital sphere. Our daily lives are interconnected with our household devices, our manufacturing lines are filled with robots, and the way we communicate depends on our Internet connections. Companies are continuously looking to innovate and change the way society interacts. Hence, it is no surprise that promises of future success and profit maximization lie in the hands of digital technology.
5.1.1 Even More Connected: The Internet of Things
Has anyone ever imagined what a day would look like without technology? From the moment a person opened their eyes, they would be unable to check for updates on their favorite social-media applications, they would not be able to find the fastest route to their destination, nor would they be able to ask Siri, “What does the weather look like today?” Now, imagine a world inextricably influenced by technology and its ability to connect numerous devices. The moment a person gets out of bed, motion sensors detect their movements, open the curtain blinds, start brewing the coffee, and prompt their home-assistant devices to deliver the daily weather report.

Truth be told, our lives are already highly interconnected with our technological devices. Yet, in the future, interactions with smart devices will only increase. This is the Internet of Things (IoT): an era defined by the fourth industrial revolution whereby devices, from simple sensors to smartphones and wearable devices, are connected via the Internet. For individuals, businesses, and organizations, the future and potential of the IoT are limitless. Integrating AI, 5G, and big data, the IoT will transform the way we interact with our devices at home, at work, and throughout cities.
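To make the morning-routine scenario above concrete, the following minimal sketch shows the publish/subscribe pattern that many IoT systems are built on, in which one sensor event can trigger several connected devices at once. The device names and topics are purely hypothetical, and a real deployment would typically use a broker protocol such as MQTT rather than this in-memory stand-in.

```python
# A minimal, self-contained sketch of the publish/subscribe pattern behind
# many IoT setups. Topics and devices are hypothetical.
from collections import defaultdict

class MessageBus:
    """In-memory stand-in for an IoT message broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(payload)

bus = MessageBus()

# Each "device" reacts to events published by other devices.
bus.subscribe("bedroom/motion", lambda e: print("Blinds: opening (motion detected)"))
bus.subscribe("bedroom/motion", lambda e: print("Coffee maker: starting brew"))
bus.subscribe("bedroom/motion", lambda e: print("Assistant: weather today is " + e["weather"]))

# A single motion-sensor event triggers every subscribed device.
bus.publish("bedroom/motion", {"weather": "sunny, 21 degrees"})
```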

+ Gianluca Ariello
An interesting consequence of the increasingly frequent use of technological devices in our daily lives is that we are becoming ever more dependent on them for many tasks. Some studies have recently analyzed the effects of this on brain development, focusing especially on the newest generation, which is feared to be excessively dependent on mobile devices for accessing information and for social interaction. Surprisingly, no negative correlation has been found so far, leaving only the worry about the possible consequences of not having access - for any reason - to the internet and/or digital devices.
Source: link

At the individual level, wearable technology will continuously monitor and track user preferences and habits, helping us improve our fitness, listen to music wirelessly, or connect to our smart devices. From smart wristwear to medical wearables, IoT technology has the potential to generate huge volumes of personal data for consumers. On a positive note, sharing our data across numerous smart devices enables us to reap the benefits of personalization at scale. Our devices can automatically perform personalized actions, from setting the lights to our favorite colors to suggesting a new Netflix series, restaurant, or even holiday destination. According to leading tech research firm Gartner, the global wearable device market is estimated to see more than $87 billion in revenue by 2023. In comparison, the current market value of wearable technology is estimated at $47.89 billion, a comparison that illustrates how rapidly wearable tech is expected to keep growing.

Taking a step back to our homes, smart-home devices will develop automated support systems for everyday tasks, including energy-efficiency tracking, safety, entertainment, access control, and personal comfort. Smart homes can leverage appliances, lighting, electronic devices, and more, communicating with each other to learn a homeowner’s habits and develop automated support systems. A smart home not only provides in-house comfort but also helps the homeowner cut costs in several ways. For example, energy-efficiency trackers on a thermostat lead to lower energy consumption, which results in lower electricity bills. The smart-home business economy is set to cross $100 billion by 2022. 

At the municipal level, smart cities will integrate all levels of public life for better urban planning, optimized energy consumption, and increased public safety through smart traffic surveillance. With spending on smart-city development set to reach $158 billion by 2022, significant growth is expected from numerous emerging innovations. Furthermore, this exponential rise is expected to continue, with the global smart-cities market size projected to reach $1.3 trillion by 2028. Applications include wearables equipping police officers with real-time information, seamless connectivity between public transportation systems, smart bins that track waste levels, and pollution trackers. The practical applications of AI and IoT technology in traffic control are already becoming clear. In New Delhi, an Intelligent Transport Management System (ITMS) is in use to make real-time dynamic decisions on traffic flows. Ultimately, technology has become a vital component in the creation of more efficient, sustainable, and resilient cities.

For businesses, smart industrial devices use real-time data analytics and machine-to-machine sensors to optimize operations, logistics, and supply chains. Industries ranging from manufacturing to mining are relying ever more heavily on digital transformation as a means to improve efficiency and reduce human error. From real-time data analytics to sensors within the supply chain, advances in IoT technology can prevent costly errors across different industries. Furthermore, the implementation of smart devices has the potential to increase workforce productivity and reduce miscellaneous costs. In sum, manufacturers will have access to much more accurate data about what is going on across their operations.

With all the benefits of smart devices, it is difficult to imagine any downsides. Nevertheless, the integration of smart devices within the confines of our homes, cities, and industries poses privacy implications. Everything connected to the Internet can be hacked, and IoT products are no exception to this unwritten rule. In the hands of the wrong people, large volumes of personal data can threaten national security and carry major macroeconomic consequences. Data security must be improved to keep pace with advancements in technology and our adoption of smart devices within the comfort of our personal lives.
5.1.2 Will Robots Take Over?
Now, let us begin to imagine a world where robots are an important part of our daily lives. From the way we practice medicine to the manufacture of essential goods, the future of robotics has the potential to make our lives easier and safer. Top tech companies are in a constant race to change the way robots are implemented in people’s everyday lives. Inherently, greater investment in robotics research and development will lead to falling robot prices, a larger pool of accessible talent, and seamless integration into supply-chain management or even home appliances. Therefore, as new technologies promise to let robots sense the world in ways that are far beyond humans’ capabilities, the future of robotization promises to change virtually every aspect of human life.

With regard to human-robot interaction, advanced safety systems mean robots can take up new positions alongside their human colleagues. For example, sensors indicating the risk of a collision with an operator will cause the robot to automatically slow down or alter its path. This technology permits us to put robots and people side by side and, in effect, enables the use of robots for individual tasks on otherwise manual assembly lines. What the novel coronavirus pandemic has illustrated is that robots are the perfect coworkers during a pandemic. Unable to get sick and willing to conduct dull, dirty, and dangerous work, robot helpers can enable human doctors and nurses to do what they do best: use their problem-solving capabilities and be empathetic toward patients. These are skills that robots still have difficulty replicating. Looking forward, the potential for greater human-robot interaction promises productivity benefits and shapes a world in which people and machines get along without hurting each other.

For businesses, automation systems are becoming increasingly flexible and intelligent, adapting their behavior automatically to maximize output or minimize cost per unit. With advancements in AI and ML, companies are now able to absorb data from a huge variety of sources and analyze it quickly. Three important trends will likely result from the rapid adoption of advanced technologies within supply chains. Firstly, robot modules performing simple to moderately complex tasks will facilitate the rise of customized solutions. From robots that can take blood samples to robots that form part of the manufacturing assembly line, companies are likely to rely on robots to increase volume manufacturing efficiencies, bringing down costs significantly. Secondly, robots will come to be seen as standard automation devices. Led by a range of less complex systems, companies will be able to design robots to complete specific tasks in specialized fields. Lastly, as advances in technology give robots the ability to learn, robots will play a central role in shaping strategic decisions. Utilizing AI and ML capabilities, robotics will play an important role in navigating business strategies and developments. The rising sentiment supporting robotics is best reflected by Peter Van Der Putten, assistant professor at Leiden University and global director at Pegasystems: “Smart machines, robotics, and other emerging technologies will fundamentally disrupt existing economic models. How we respond to these forces will shape our society for many generations to come.”

For individuals, there has been recent debate on whether robotization comes at the cost of the availability of work. Although automation technologies will boost productivity and growth, they will bring large-scale transitions for workers, affecting multiple sectors, the mix of occupations, the skills required, and the wages earned. More specifically, low- to medium-skilled labor will likely be automated by 2030, while high-skilled jobs will experience the most growth. The risk is that automation will exacerbate income inequality and the stagnant wage growth that has characterized the labor market over the last decade. Yet, although advancements in technology pose significant risks to labor opportunities, they also offer some solutions. Judging from the lessons of previous industrial revolutions, technology can help create new jobs and new opportunities to earn income, even outside the technology sector. A significant example can be seen in e-commerce, where related activities such as package delivery and supply-chain management have grown in tandem. Furthermore, the advent of the gig economy enables professionals to conduct work on digital platforms, offering flexibility and occupational independence. The gig economy is a novel concept whereby companies hire independent contractors or freelancers instead of full-time employees; for example, delivery drivers on popular food-delivery apps are part of the gig economy, as are drivers on ridesharing apps like Uber. Thus, policymakers and innovators must work together to identify solutions that will upskill workers and enhance opportunities for all.

+ Elias Sohnle Moreno
The Danish flexicurity model is an example of how a government is trying to reap the benefits of the gig economy while preventing workers from experiencing the precarious living conditions that are by-products of the changing labor market.
Source: link

5.1.3 The Potential of Blockchain
Blockchain is a decentralized, distributed, and oftentimes public digital ledger. Imagine it as a traditional phone book: each page contains a different set of information, but without the entire book, it does not provide any value. In blockchain technology, each page is essentially a block that records transactions. While a traditional phone book binds the pages physically together, the blockchain database is managed autonomously on a peer-to-peer basis, which links the information. An interesting thing to note is that once data has been recorded in a blockchain, it becomes very difficult to change. As a consequence, an individual block cannot be tampered with without altering all subsequent blocks, which requires the consensus of the network. Hence, blockchain is an essential element in safeguarding valuable personal information and credentials online. When blockchain record-keeping is used, units are given identifiers, which serve as digital tokens. Participants in the blockchain are given unique digital signatures, which they use to sign the blocks they add to the blockchain. Every step of the transaction is then recorded on the blockchain as a transfer of ownership from one peer to another.
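As an illustration of the ledger idea described above, the short sketch below (a toy example, not a production design) builds a chain of blocks in which each block stores the hash of the previous one, so tampering with an old record invalidates every later block unless the whole chain is rewritten.

```python
# Toy illustration of a hash-linked ledger; not a production blockchain.
import hashlib, json, time

def block_hash(block):
    """Hash a block's contents, excluding its own hash field."""
    data = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    """The chain is valid only if every hash and back-link still matches."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))                          # True

chain[0]["transactions"][0]["amount"] = 500     # tamper with an old record
print(is_valid(chain))                          # False: later blocks no longer match
```

In a real network, rewriting the chain after such tampering would additionally require the consensus of the other participants, which is what makes the record hard to alter in practice.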

Holistically, this is important for economic development and business operations. Blockchain technology can greatly empower identity provision for refugees who emigrate without any legal status. It can facilitate cash remittances for those without access to a commercial bank or other financial institution. Lastly, it can improve infrastructure and services around the world. When safeguarding valuable credentials online, blockchain technology can help protect vulnerable populations from attacks or possible breakdowns in infrastructure. In the case of climate change, blockchain’s verified nodes can help track data and accurately monitor carbon emissions. With adequate foundations, blockchain technology could replace traditional methods of record-keeping, saving time and reducing costs. In light of future progress, blockchain can be instrumental in empowering vulnerable populations while providing a platform for economic development and improving our environment.

Looking forward, blockchain technology can be influential in developing business operations in addition to driving economic development and technological change. For businesses, blockchain technology is an enabler of trust, offering greater transparency and efficiency. With regard to the supply chain, blockchain has the potential to help organizations verify the sources of goods and track the movement of inventories throughout the supply chain. In a more client-facing role such as customer engagement, blockchain can additionally redefine loyalty programs and customer-relationship management for businesses. Similar to its role in protecting identities, blockchain holds great promise with regard to contracts and dispute resolution. Smart contracts are programs stored on the blockchain that run when predetermined conditions are met; they can eliminate a traditional paper-based credentials system and offer greater transparency for a more efficient verification process.
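To illustrate the “runs when predetermined conditions are met” idea, here is a minimal, hypothetical sketch of an escrow-style agreement expressed as ordinary code. Real smart contracts are deployed on blockchain platforms such as Ethereum and written in languages like Solidity; this sketch only mirrors the logic.

```python
# Hypothetical escrow "contract": payment is released automatically once the
# predetermined delivery condition is met, and every state change is logged.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False
        self.log = []                       # stands in for on-chain history

    def confirm_delivery(self, confirmed_by):
        self.delivered = True
        self.log.append(confirmed_by + " confirmed delivery")
        self._maybe_release()

    def _maybe_release(self):
        # The predetermined condition: release payment only after delivery.
        if self.delivered and not self.released:
            self.released = True
            self.log.append("released " + str(self.amount) + " to " + self.seller)

contract = EscrowContract("alice", "bob", 100)
contract.confirm_delivery("alice")
print(contract.log)   # ['alice confirmed delivery', 'released 100 to bob']
```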

In terms of technological change, many central banks have explored the idea of using blockchain to improve their nation’s payment infrastructure and issue their own central bank digital currencies (CBDCs). Similar to transferring payments on mobile payment platforms, CBDCs are a digital form of legal tender created and backed by a central bank. This means that physical cash could eventually be replaced by a digital currency that still serves as a store of value, a unit of account, and a medium of exchange. The advantages of CBDCs lie in their efficiency. Wholesale CBDCs can facilitate more efficient clearing operations between central banks and commercial banks, lower remittance costs, and enable instantaneous transactions between parties. Furthermore, because blockchain offers greater transparency and traceability, it protects the market from illicit activity, fraud, and money laundering. For businesses, this means a fairer, more competitive market with reduced risk from fraudulent activity.

+ Martin Bernal Dávila
This implies that Central Banks and governments will not have complete control over their currency because of the blockchain's decentralized nature.

With regard to digital entertainment, non-fungible tokens (NFTs) have redefined the ownership and authenticity of artwork in a more digital direction. Remember the traditional phone book; imagine that the phone book also records the owner of a specific painting. Until someone purchases the NFT from the owner, the phone book will continue to show that person’s ownership of the digital art. Traded through blockchain technology, NFTs are simply digital assets. From artworks to tweets, NFTs allow creators to retain control over their digital art, as each sale is recorded on the blockchain, where it cannot be altered by just anyone. Because ownership is guaranteed, both artists and collectors have gained confidence in NFTs. So much so that data suggests people were buying and selling more than 85,000 NFTs a day in May 2021, amounting to a trade value of $5.8 million in a single day. Ultimately, authenticity within the blockchain ecosystem also promotes transparency. As everyone can see who owns a specific painting, it provides certified proof of ownership that will last for eternity. This is the key differentiating factor that has made NFTs an asset of the future.

Considering the future, blockchain has the potential to benefit us all. Blockchain technology can significantly help society by improving trust, transparency, and efficiency while facilitating innovation.
5.1.4 Living in the Digital World: The Metaverse
Now, imagine the world through the lens of a virtual reality (VR) headset: a world where the physicality of touch is replaced by interactions of digital avatars shared within a connected online universe. Similar to the worlds of science fiction novels and movies such as Ready Player One, The Matrix, and Tron, the metaverse is an iteration of the Internet supporting persistent online 3-D virtual environments. The metaverse does not yet have a clear, uniform definition. Jean Folger defines the concept for Investopedia as follows: “The metaverse is a shared virtual environment that people access via the Internet. Technologies like virtual reality (VR) and augmented reality (AR) are combined in the metaverse to create a sense of virtual presence.” An opportunity for major tech companies to push the boundaries of connectivity, the metaverse has become their newest macro-goal and is emerging as a generational change in how digital interactions and commerce unfold. Supporters of the metaverse say the new digital world will have a profound effect on our day-to-day lives, similar to the advent of the Internet or the invention of the telephone. A key component of the metaverse ecosystem is the communications architecture enabling persistent connectivity within the virtual space. This brings enormous opportunities for individuals and artists, providing access to other creative thinkers and unlimited access to fundamental tools. For individuals who want to work from home rather than relocate to today’s urban centers, it opens the door to digital workplace-collaboration software. Furthermore, for people who live in places where opportunities for education or recreation are more limited, it enables a realm for immersive education and virtual learning. The bets that Facebook parent Meta Platforms Inc., Microsoft Corp., and others are placing on the metaverse reflect a growing belief that it is the next evolution of social connection. From harnessing greater social connections to providing an environment for immersive learning, the metaverse is likely to become the gateway to most digital experiences, a key component of all physical ones, and the next great labor platform.

+ Elias Sohnle Moreno
The metaverse is an abstract concept. It can refer to various things depending on the context in which the concept is used. The term metaverse has been attached to so many ideas that it has stopped having a common meaning. It is normal to be confused by the concept.

For businesses, the digital world offers a new way to provide innovative products and engage with customers. Companies will need to transition their marketing strategies from investing in social-media campaigns to placing ads throughout the metaverse. Within the retail sector, consumers will be able to rotate, test, and try on items of clothing through their digital avatars. Digital clothing, world-building, and even marketing will play a large role in how brands sustain consumer retention. In terms of digital advertising, harnessing data within the digital sphere provides advertisers with avenues to experiment with immersive ways of building brand recognition. Customers will not just be able to talk to brands on social media; they will also be able to interact with them in an immersive virtual realm. For entertainment, numerous companies have already experimented within the digital realm. For example, the popular video game Fortnite recently hosted a huge performance by rap artist Travis Scott, while Massive Attack headlined a music festival hosted in Minecraft. As more events are hosted within the digital realm, companies will have a plethora of opportunities for potentially profitable sponsorships. As this new iteration of the Internet takes shape, it will have massive implications for society. Marketing, communications, and branding professionals will face new challenges but also new opportunities for their businesses.

Harnessing opportunities for social connection, the metaverse will provide an immersive experience for consumers. The pandemic has already shifted culture online. For social connection, the metaverse will unlock new experiences for gatherings such as weddings or even corporate meetings, allowing users to express themselves in more immersive ways. Within the education realm, the confines of the classroom will be eliminated and replaced with a learning environment defined by the senses of touch, sight, and hearing. For gamers, it provides one of the most immersive experiences of fantasy worlds or traditional games, all while connecting users from all over the world in real time. In the fitness industry, users will be able to work out in the digital world alongside friends and athletes from around the globe, connected through AR and VR technology. In short, the potential of the metaverse is that it can redefine the way we interact with each other.

+ Elias Sohnle Moreno
According to Jody Medich, the current state of graphical user interface (GUI) “limits our ability to visually sort, remember and access information, provides a very narrow field of view on content and data relationships and does not allow for data dimensionality”. Extended reality technology enables the visual representation of information in a three-dimensional space, unlocking the users’ ability to fully utilize spatial cognition in the digital world. Humans have evolved to thrive in a 3-dimensional environment, and the leap from traditional GUIs to VR brings tremendous implications for human interaction with the virtual world.
Source: link

The metaverse will unleash amazing creativity and open up new frontiers for brands, businesses, and the consumer experience. Ultimately, the future of the metaverse will redefine the way we interact, create, and connect.
5.2 Big Data, the Internet, and Connectivity
As of 2021, more than 60% of the world’s population is online and two-thirds of the people on Earth own a mobile device. Each day in 2020, about 1.3 million people joined the Internet, and by the year 2030, there will be billions of computers, sensors, and robotic arms scattered throughout all the places and infrastructure where humans live their lives. For the first time, there will also be more computers and sensors than human brains and human eyes. Each of these realities points to the fact that connectivity—both to the Internet and to technology—is an integral part of life for many people around the world and will likely become even more so in the coming years.

As discussed in the digitization section, the continued pace of technological development means that the world will be influenced by digital technologies, experiences, methods of communication, and more. Living in a digital world comes with the presence of big data. “Big Data” is a somewhat vague term, which, depending on the context in which you hear it, could refer to a variety of things. It can refer to the large quantities of digital information produced by the online and offline facets of life or to the processes developed and implemented by researchers to analyze this data and derive meaning, insights, and even profit from them.  In this section, big data refers to the former: in other words, lots and lots of digital information. The IoT, the metaverse, and the digitization of our daily routines all produce or will produce incredible amounts of information as they continue to develop and scale. The presence of so much human-generated data at both the individual and societal scale means that technological developments for the analysis, repurposing, and mining of this data will likely continue to emerge in the future.

Past cases of data mismanagement have brought big data to the forefront of discussions about ethics in technology. These cases also indicate that when it comes to user data and to the innovations that have enabled all of this data to be produced, privacy and transparency will continue to be very important issues. The more users come online, the more data they produce—and the more relevant privacy and transparency become.
5.2.1 Freedom and Transparency
In 2025, an estimated 174 zettabytes (ZB) of data will be produced, captured, or replicated online and offline.  For reference, one zettabyte is equivalent to one trillion gigabytes (GB).  If one GB of data is equivalent to about 2 hours of streaming video on YouTube or another service, 174 ZB is equivalent to 348 trillion hours of streaming video  —enough for every person on Earth to watch for about five uninterrupted years, 24 hours a day. Furthermore, by 2025, 75% of the world’s population will be interacting with data every day, on average every 18 seconds.  As mentioned in the 2020 Global Risk Report by the World Economic Forum, data “are increasingly being collected on citizens by government and business alike…these data are then monetized and used to refine the development and deployment of new technologies back toward these citizens, as consumers.”  With so much data being collected and so many new technologies making use of it, it is unsurprising that the subjects—everyday citizens—have begun to scrutinize both the organizations and the methods involved in the process, calling for more transparency about what is being done.
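A rough back-of-the-envelope check of those figures, under the stated assumptions (1 ZB = one trillion GB, 1 GB ≈ 2 hours of streaming video, and a world population of roughly 7.9 billion), can be sketched as follows.

```python
# Assumptions: 1 ZB = 1 trillion GB, 1 GB ~ 2 hours of streamed video,
# and a world population of roughly 7.9 billion people.
data_zb = 174
gb_per_zb = 1e12
hours_per_gb = 2
population = 7.9e9

total_hours = data_zb * gb_per_zb * hours_per_gb    # ~3.48e14 hours
hours_per_person = total_hours / population         # ~44,000 hours each
years_per_person = hours_per_person / (24 * 365)    # ~5 years of nonstop viewing

print(round(total_hours), "hours in total")
print(round(years_per_person, 1), "years of 24-hour viewing per person")
```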

One way in which the trend toward openness and transparency in technology is evident is in the rising number of calls for open data and a more open Internet. In the words of data and intellectual property (IP) privacy lawyer Dr. Paulius Jurcys, one of the key debates in the data privacy space is over who should own personal, cell phone, and smart device data. To him, the answer is simple: “The vast majority of us would agree that individuals should be the owners of the data they generate.”  There are a number of companies and organizations that aim to put control of personal data back into the hands of the users that generate it. For example, “consent management” companies such as Prifina make it possible for phone applications to run using a user’s personal data without that personal data ever leaving the user’s possession. In other words, the application can function without either the application developers or the consent management company ever gaining access to the user’s data. 
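The sketch below illustrates the general pattern such services aim for, in which raw personal data stays on the user’s side and an application only ever receives a derived answer. It is a generic, hypothetical illustration of the concept, not a description of Prifina’s or any other vendor’s actual product or API.

```python
# Conceptual sketch only: raw readings stay inside the user's local "vault",
# and the application receives nothing but a derived answer.
class PersonalDataVault:
    """Holds raw personal data locally; exposes only derived answers."""
    def __init__(self, step_counts):
        self._step_counts = step_counts          # raw data never leaves this object

    def answer(self, question):
        if question == "weekly_average_steps":
            return sum(self._step_counts[-7:]) / 7
        raise ValueError("question not permitted by the user")

# The application asks a question; it never sees the underlying readings.
vault = PersonalDataVault(step_counts=[4200, 8100, 7600, 9900, 5400, 10200, 7300])
print(vault.answer("weekly_average_steps"))      # about 7529 steps per day
```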

The EU’s General Data Protection Regulation (GDPR) has set data privacy standards that often require companies to change their operating procedures in order to stay compliant. Consent management platforms such as Piwik Pro and Cookiebot offer software that helps businesses gather informed consent from their customers and website visitors and handle other related tasks. DuckDuckGo launched in 2008 as an alternative search engine and pledged in 2010 never to track or sell its users’ data. Since then, the company has expanded into a full-fledged Internet privacy company, building free mobile and desktop products that allow users to browse the Internet privately and block trackers from websites, email services, and mobile applications. DuckDuckGo CEO Gabriel Weinberg describes their work as “building a simple privacy layer for how people use the Internet today, without any tradeoffs. It’s privacy, simplified.” With 3 billion monthly searches and 5 million monthly software downloads, DuckDuckGo’s numbers are still much smaller than Google’s approximately 1 billion daily active users and 6.9 billion daily searches. However, the company’s search metrics and user base continue to grow quickly as more and more Internet users look to protect their privacy online. 

In June 2021, civic entrepreneur Frank McCourt launched a $100 million initiative called Project Liberty, which aims to build a more equitable and collaborative web. The project will, among other things, use blockchain to build the Decentralized Social Networking Protocol—a new type of Internet infrastructure that can “democratize social media data.” McCourt describes violations of social media users’ data privacy as one of the motivations for rebuilding the Internet, saying, “I don’t think people until recently understood how fundamentally broken it was and how much damage was being done because of this broken model and the abuse of the data.” These efforts also tie in with the related concept of the open Internet, which ensures that Internet users can “access the content, applications, and services of their choice,” and “promotes competition among network, services and content providers.” A key component of an open Internet is net neutrality. Net neutrality requires Internet service providers (ISPs), which customers subscribe to in order to access the Internet, to treat all the information they receive and transmit equally. This means they cannot block or slow down Internet traffic unless it is necessary for a specific legal, security, or temporary service issue. This rule prevents ISPs from choosing to transmit, withhold, or charge differently for data from certain websites or content based on what it might contain, which would violate the user’s right “to be free to access and distribute information and content, use and provide applications and services of their choice.”

The state of the Internet as a neutral, accessible site for expression, debate, research, connection, and more could falter in the coming years. This depends on many factors, including whether wide-scale data-privacy legislation such as the EU’s GDPR and net-neutrality laws in Chile, Canada, the United States, and other countries around the world are upheld.

The 2021 Freedom on the Net assessment found that Internet freedom declined globally for the eleventh year in a row. In the U.S., this decline was linked to widespread misinformation spreading online. In Belarus and Myanmar, government officials targeted online journalists and shut down independent news outlets. In at least forty-five countries, authorities are suspected of obtaining spyware or data-extraction technology from private vendors, and officials blocked Internet or social media access in at least twenty countries. The freedom of the Internet also depends partly on how social-media companies such as Facebook, Instagram, and Twitter continue to influence the way discourse happens. Because social media has become one of the primary ways in which people receive their news—as well as a prominent discussion forum—these entities have an enormous influence over the flow of information and emotion at a societal level. Therefore, the freedom and neutrality of the space depend in part on how responsibly these entities handle that influence. For example, a 2020 Pew Research survey found that 53% of adults in the U.S. “often” or “sometimes” get their news from social media sites such as Facebook, YouTube, Twitter, Reddit, and Instagram. Given that these platforms have the ability to modify information-sharing algorithms and deactivate accounts at will, there are valid concerns about how much power they have to shape political debate and even indirectly cause violence. 

+ Elias Sohnle Moreno
Sometimes companies like Facebook have incentives that conflict with the end-users' interests.
Source: link

The desires for transparency and more user control over data and information also rear their heads in a different context: academia and research, especially in science, technology, engineering, and math (STEM) fields. Formed largely of academics and researchers, the open science movement aims to make scientific research and data accessible to everyone. Its main goals are to make scientific papers open access rather than kept behind an academic journal’s paywall, to communicate scientific knowledge widely, and to make the research process more transparent and accessible. Subfields like citizen science allow people to take part in research by helping to collect or interpret data for an academic project. The Unpaywall.org database compiles open-access scholarly articles from 50,000 different publishers and repositories. SciHub, a self-proclaimed pirate website that has provided “mass and public access to tens of millions of research papers,” is another manifestation of the movement for open science. Launched in September 2011 by graduate student and computer programmer Alexandra Elbakyan, the website allows users to bypass academic journal paywalls and read or download research papers for free. The aim is to “fight inequality in knowledge access across the world” and equalize a power imbalance that makes it easier for educated people, and especially people in rich and Western countries, to access knowledge above everyone else. SciHub has been sued for copyright infringement multiple times and has been blocked in eleven countries, typically reappearing under a new web domain. SciHub now faces a court case in India filed by a group of major publishers. They want Indian ISPs to block the website, but according to legal experts, the court may actually rule in favor of SciHub. If this happens, it will be a huge victory for open science and will likely challenge the hierarchy of the academic publishing world.

As the Internet continues to be a crucial part of life in the coming decade, and Internet users generate ever more personal data, users’ privacy concerns will continue to grow. It will become even more important for companies and organizations to keep their data collection and uses of it transparent. Companies that help promote freedom, privacy, and transparency on the Internet will keep gaining popularity, and governments around the world will feel the pressure to protect the freedom of the Internet. The safety and privacy of users and their data depend largely on the strength of the regulations meant to protect them, and these regulations might become threatened if any given country turns toward political turmoil or authoritarianism. It is likely that Internet users will increasingly choose to safeguard their data in their online lives and consider avoiding platforms that do not make that possible.
5.2.2 Web 3.0: “The Financialization of Everything”
Web 3.0, also known as Web3, is a vision of a new and decentralized Internet built on the foundation of blockchain and “owned by the builders and users” (as described by Internet investors and entrepreneurs Chris Dixon (@cdixon) and Packy McCormick (@PackyM) on their Twitter feeds). It is also a phenomenon that nods toward people’s increasing desires for transparency and control in their technological lives.

It helps to think of Web 3.0 as an evolutionary step beyond the versions of the Internet we have had previously. Web 1.0, also known as the “read-only web,” was the first version of the Internet. Active between roughly 1990 and 2000, it left users largely passive; they could read content from the producers of a given site but could not meaningfully communicate back to them. This web was characterized by static and personal websites. Web 2.0, the social or read-write web, flourished from 2000 to 2010 and, it can be argued, continues even up to today. Here, users could communicate with one another and interact with the websites they visit. Every web user has the ability to be a content producer, and their content is distributed freely by platforms that aggregate this content and find ways to make money from it. These platforms, companies like Facebook, Spotify, and the digital publishing site Medium, have ended up controlling large swathes of the Internet this way. While, in theory, the Internet is open for anyone to claim a space, it is very difficult to pull traffic from or make money independently of the dominating Internet and social media companies. That is why Web 3.0, the semantic or read-write-execute web, is often proclaimed as the future of the Internet. Mason Nystrom, a research analyst at cryptocurrency company Messari Crypto, describes it in the following way: “In short, Web3 is a trend of democratizing the Internet—taking all existing protocols and services, from Internet providers to daily apps like Spotify, and building them on permissionless blockchains with open protocols and open standards…so that they benefit people rather than entities.” In other words, Web3 will still have all of the Internet services and functionalities that users have come to love and rely on, but they will be built in a transparent manner that gives people access and invites them to join in ownership of the community.

So, what can Web3 do? Web3 is a cryptocurrency-enabled content platform, meaning that digital currency can be used to make anonymous, secure purchases directly within Web 3.0. For each action users take within the system—for example, using their computers to help host data that other users can access—they receive a digital token that gives them a small stake in the Web 3.0 system and could, in theory, be exchanged for cash at a later time. The user essentially becomes a partial owner of the platform. This could explain why Web3 has also been described as “the financialization of everything.” According to Mason Nystrom, right now we are still largely in the phase of Web 2.0 characterized as “a subscription era” in which large aggregating platforms like Spotify and Substack take most of the value that user-contributors produce. “Sometimes (like Spotify), very little value goes to individual artists. But top Substack writers make millions of dollars per month (as compared to tens of thousands on Medium) because monetizing [as an individual person rather than relying on a content aggregation platform like Facebook or Medium] is more effective.” A crypto-enabled platform will allow a user, for example an independent artist, to assign a digital representation (an NFT) to a piece of digital art so that they can collect a royalty each time it is sold in the future. In this way, an artist or creator can continue to make money off their earlier works, which were worth much less at the time of the first sale than they are once the artist has built up their career.
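A hypothetical sketch of that royalty mechanism might look as follows; the royalty rate, prices, and helper function are illustrative assumptions, not a description of any specific NFT marketplace.

```python
# Hypothetical royalty split: each resale routes 10% back to the creator.
def resell(price, creator_royalty=0.10):
    """Split a resale price between the current seller and the original creator."""
    royalty = price * creator_royalty
    return {"to_creator": royalty, "to_seller": price - royalty}

creator_earnings = 50.0                  # the artist's original sale price
for resale_price in [200, 1500, 12000]:  # later sales as the work appreciates
    creator_earnings += resell(resale_price)["to_creator"]

print("creator's cumulative earnings:", creator_earnings)   # 50 + 20 + 150 + 1200 = 1420.0
```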

People who are passionate about Web 3.0 are excited by the promise of putting control back into the hands of individual users, and taking power and profit away from the large platforms, which control our current version of the web. As Nystrom further explains, “The platforms of today have largely built their networks off of the backs of individuals [think Spotify, Medium]…and those individuals don’t get the value that’s being extracted from them. Another key principle is that data is also a very powerful force. As a company gets more data, it can produce more services and applications. That centralization of data is important because it’s hard for any sort of innovation to happen—it’s hard for a new startup to compete with the incumbents. So, making sure that that data is open and available for other applications and companies to utilize is important, and also making sure that data is not abused.” 

However, despite the surging popularity of Web3 in (primarily) technologist and tech-investor circles, there are also doubts about whether Web 3.0 will be successful and even about whether it represents a distinct phenomenon or is just a passing fad. Indeed, scrolling through tech-focused online publications and spaces like LinkedIn shows dozens of articles, discussion panels, and even funding opportunities related to a combination of Web 3.0, blockchain, and “DeFi” (short for decentralized finance), but often the distinction between these terms is not made clear. Some critics of Web 3.0 believe it will simply replicate the data privacy problems caused by current tech companies because any actions users take in the Web 3.0 environment will be recorded and publicly stored on the blockchain. Others do not see Web3 as serious competition for tech giants like Google because not enough people are actually buying digital assets and moving away from the major platforms. To date, less than 10% of the global population owns cryptocurrency, and Web3 applications have tens of millions of users compared to the billions on the traditional Web 2.0. At the same time, in 2021, investors put $30 billion into cryptocurrency startups. In a December 2021 episode of the Modern Finance podcast, Reddit founder and tech investor Alexis Ohanian stated that “at least half” of his venture capital fund’s upcoming investments “will be Web3.” He sees the rate of developments in the Web3 space “continuing to accelerate,” and expects his fund’s investment patterns to include even more Web3 in the future. He takes his cues from the trend he sees of “talented people, [both] on the design front and the product front, saying they want to spend the rest of their life working on” the cryptocurrency space. This group includes veterans of tech innovation like Coinbase founder Brian Armstrong; people “who are building stuff [in Web3] who I know are seeing where things are headed.” So, while the questions and criticisms of Web3 are certainly valid, only time will tell how the ecosystem will develop. One thing is clear, though: the current burst of innovation around Web 3.0—and the calls for more transparency, accessibility, and community-mindedness in the realms of scientific and technological innovation—points to a genuine desire to reshape the way the Internet functions and to create a more equal system for everyone who accesses it.
5.2.3 Human-Centered Data and Tech
Behind every data point captured in a mobile application or logged in a database, there is a person somewhere on Earth: a person with hopes, dreams, personal preferences, and a lifetime of personal and individual choices to make. As society becomes increasingly machine-driven and data collection and usage become ever more normalized, this can be easy to forget.

One of the dangers of constant and impressive technological development is that people can start to believe that technology is neutral or by definition good—that any development is a positive development. In reality, the opposite is true—as discussed in the Digital Ethics section, it is possible to replicate unjust systems or embed human biases into algorithms, products, and technical systems. In addition, sometimes developments that appear to improve the user experience are mainly meant to help the company. For example, usually when companies collect user data, they are doing so because ultimately, it allows them to make more money from the users by offering personalized recommendations or other services. While this might be a benefit to the user, it primarily makes a profit for the company and might also introduce new complications in terms of privacy and data security.

Human-centered design (HCD) focuses on how products, services, and systems can best enhance the lives of the people they are designed for. While this may sound like an obvious goal, it marks a slight shift away from the standard design approach sometimes called user-centered design. Historically, the standard way of designing technology often focused on creating a new, advanced product, sometimes (but not always) to fill a specific need: the computer, for example. HCD focuses on the needs of the product’s users and considers how the product should be designed to best accommodate a human user: for example, prompting a user to save their work before turning off their computer so that they actually make use of the computer’s capabilities.

Human-centered data and technology apply this same worldview specifically to the digital and technological worlds. Technological systems should be discoverable: in other words, users should be able to “find out and understand what the system can do.” Otherwise, the innovation does not provide any useful value to the user. This approach also urges people to ask what value technology brings to their lives, but also what changes or sacrifices it demands and whether they are worth it. In her 2017 Medium essay, “The Tech Humanist Manifesto,” self-proclaimed “tech humanist” Kate O’Neill states that “We need technological progress…but for our own sake, and for the sake of humans who come after us, we need to wrap that progress around human advancement.” She encourages everyone to consider how humanity as a whole can best build and deploy the technology that is so interwoven in their lives.

Although they may not always call it human-centered data and technology, many people and institutions are pushing for technology that respects and ethically enhances human life rather than disregarding human rights or basing development on profit-seeking. In September 2021, the World Economic Forum and the City of Helsinki, Finland, brought innovators together from across the globe to discuss “A human-centric approach to data for progress, people, and the planet.” Design schools increasingly offer education in HCD principles, and more people working in technology and data science are beginning to advocate for a human-centered approach to their work and design goals. While these changes are undoubtedly positive for product users, they often also make good business sense. As one “creative data scientist” and AI business owner wrote in an essay about human-centered data science, many businesses struggle to successfully implement data science and ML products because “their true impact is lost through the often irrational, biased, and difficult-to-predict humans who are tasked with using them.” In other words, the products have often not been designed with the human-ness of the human user in mind.

+ Kim Tan
In my whole college experience, I would say that one of the most life-changing moments I had was when my professor introduced the Design Thinking principle. Design Thinking positively advocates observation and empathy to derive the best creative, innovative solutions to fulfill consumers' needs. Ever since then, I've adapted and applied the human-centered mindset in various aspects as an individual and a business major, completely redefining my outlook on conscious innovation/technology/business. 

One book I highly recommend for this is Creative Confidence by brothers Tom and David Kelley.

One of the positive aspects of technical innovation is the fact that emerging technologies offer hope. As author and strategic consultant Kate O’Neill writes in her book A Future so Bright, “The future will be what we do the work to make it…emerging technology brings with it tremendous power and offers the potential to solve human problems at scale.”  The flip side, however, is that these technologies must be built with human, societal, and even environmental needs in mind in order to effectively solve these problems. Approaching data and technology in this way will help improve technology safety standards, improve patient care in healthcare systems, (re)build trust in institutions, and much more. 

In the words of Yannis Kotziagkiaouridis, Global Chief Data and Analytics Officer, Edelman Data & Intelligence, “Data shouldn’t be characterized as the new oil or gold because it is not best acquired through extraction. Nor should it be considered a currency because it must confer value to others beyond those who hold it. Data should be treated as a mutually beneficial gift. One best given and received from a place of empathy.”  It is notable that engineers, data scientists, and others in technology management positions are waking up to the importance of respect and empathy in technology design. By adopting the mindset of HCD in data and technology, we can ensure that the technological innovations of the future will help humanity to create a more positive world rather than just a richer one.
5.3 What Can Technology Contribute to Healthcare?
Worldwide, hospitals and healthcare facilities have been facing catastrophic financial challenges related to the Covid-19 pandemic. The pandemic has also imposed a burden on healthcare workers’ general well-being: many have had to work long shifts while experiencing stress and anxiety, affecting their ability to cope. Ever since the pandemic spread around the world, human health has probably never been so front and center for most individuals compared to pre-pandemic life. Suddenly, people became interested in vaccine pipelines: How is it possible that it used to take ten to fifteen years to develop a vaccine, and suddenly it took less than a year? People are more engaged in discussions regarding health and how far governmental institutions can go in imposing healthcare mandates. Just as technology significantly accelerated the e-commerce industry, technology is playing an increasingly bigger role in healthcare. This section will explore developments at the intersection between human health and technology.
5.3.1 The Digitization of Health
A major change, accelerated by the pandemic, is the rise of telehealth. Early in the Covid-19 pandemic, telehealth usage surged as consumers and providers looked for ways to safely access and deliver healthcare. In recent years, the terms telemedicine, telehealth, digital health, and virtual healthcare have been used interchangeably. Going forward, the term digital health will be used to refer to the broad field, including categories such as mobile health, health information technology, wearable devices, telehealth and telemedicine, and personalized medicine. The worldwide digital health market is expected to grow from $84.08 billion in 2019 to $220.94 billion by 2026, meaning the market would be roughly 2.5 times larger than before the pandemic. Several different technologies are transforming the healthcare industry as we know it; this section focuses on some of the most important developments.
5.3.2 Telehealth & Telemedicine
According to a McKinsey & Company report, telehealth use has increased thirty-eight-fold compared to before the pandemic. Despite the significant adoption of telehealth early in the pandemic, the telehealth utilization rate generally fell over the course of 2021. Benefits of telehealth adoption include increased convenience for receiving routine care, improved access (especially for behavioral health and specialty care), and improved care models and health outcomes, especially for patients with chronic conditions. The future of healthcare is most likely a hybrid model, in which patients receive a mix of virtual and in-person care. This shift to hybrid care could reduce some of the major problems in the healthcare system. For instance, the United States has a shortage of primary care clinicians, which means it can be difficult to get an appointment; the shortage is especially acute in rural and poor urban communities. Many patients end up turning to urgent care or emergency departments, which is far costlier than a standard office visit. 

Although hybrid healthcare offers a lot of promise, the world should be careful with increasingly digitized healthcare. There are concerns that safety and privacy may be compromised by rapid deregulation to enable telehealth. There is also a risk of a digital divide occurring in hybrid healthcare models. Generally, virtual healthcare can improve access to care for underserved communities.  However, what will happen to patients who do not have access to the Internet or cannot afford other remote monitoring tools needed to make virtual healthcare work? In an increasingly aging world, how will virtual healthcare affect (digitally) illiterate individuals? In the future, the healthcare system must consider how to make hybrid healthcare accessible without leaving anyone behind.
5.3.3 Personalized Healthcare
Imagine having a complete and accurate depiction of someone’s overall health. Instead of having to wait for symptoms to arise and get treated, people would be able to keep track of their health and prevent sickness instead. Advances in precision medicine and AI are making this personalized healthcare a reality.

+ Stefanie Sewotaroeno
I'm so glad that this gets its own focus, as lately, I have been wondering: what will be illegal in the future (for health reasons)? For example, asbestos was widely used in construction not too long ago, but now it is banned. Cocaine is another example. We have medications in our cabinets that are accompanied by a long sheet with possible side effects, ranging from common to rare. Will one or more of these medications become illegal in the future? If yes, why? I'm very curious about how this will develop in the future.

People’s individual health is heavily influenced by their lifestyle, nutrition, environment, and access to care. Behavioral and social determinants and other exogenous factors can now be tracked and measured by wearables and a range of medical devices. These behavioral, socio-economic, physiological, and psychological factors account for about 60% of people’s health determinants; genes account for approximately 30%, and medical history only accounts for about 10%. Over the course of a lifetime, a person will generate the equivalent of over 300 million books of personal and health-related data, which could unlock insights into a longer and healthier life. Currently, however, a vast amount of untapped data that could greatly improve our understanding of people’s health exists outside of medical systems. An example of a technology that can improve data insights in healthcare is wearable technology: wearable or implanted devices or sensors used for monitoring and logging patients’ vital parameters. Patient data collected from wearables provides valuable insight into the prognosis of a patient’s condition in a natural environment over an extended period, which is essential for accurate and faster diagnosis. 
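As a simple illustration of this kind of continuous monitoring, the sketch below logs hypothetical resting heart-rate readings from a wearable and flags values that fall far outside the wearer’s own baseline. The thresholds and readings are invented for illustration and are not clinical guidance.

```python
# Invented readings and thresholds, for illustration only.
from statistics import mean, stdev

readings = [62, 64, 61, 63, 65, 62, 60, 88, 63, 61]   # resting heart rate, beats per minute

baseline = mean(readings[:7])      # personal baseline from the earliest readings
spread = stdev(readings[:7])

for minute, bpm in enumerate(readings):
    # Flag readings far outside the wearer's own baseline range.
    if abs(bpm - baseline) > 3 * spread:
        print("minute", minute, ":", bpm, "bpm flagged for review",
              "(baseline ~", round(baseline), "+/-", round(spread, 1), ")")
```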

+ Diede Kok
We sometimes forget how much progress humanity has made in the medical field over the last century. It was only in 1884 that Theodore Roosevelt's wife and his mother died on the same day, of post-pregnancy kidney failure and typhoid respectively, leading the future president to write: 'the light has gone out of my life.' Many once-commonplace ailments are now reduced to the history books, and the despair that Roosevelt felt is increasingly rare, sparing millions of people from losing their light after losing a loved one. Source: Leadership in Turbulent Times, by Doris Kearns Goodwin, p. 125.

Advances in precision medicine translate into tangible benefits, such as early detection of disease, and the resulting personalized treatments are becoming more commonplace in healthcare. Precision medicine, when integrated into healthcare, has the potential to generate more precise diagnoses, predict disease risk before symptoms occur, and design customized treatment plans that maximize safety and efficiency. The power of AI technologies to recognize sophisticated patterns and hidden structures has enabled many image-based detection and diagnostic systems in healthcare to perform as well as, or in some cases even better than, clinicians. AI and precision medicine are converging to help solve the most complex problems in personalized care. 

+ Benjamin Von Plehn
Very interesting to see how AI and ML already have an impact on our health. 
There is also robot-assisted surgery, which I find interesting. The surgery is done with greater precision, miniaturization, smaller incisions, decreased blood loss, less pain, and quicker healing time. These systems are mainly used in the US and will soon be found more often in Europe.
Source: link

Current healthcare practices are not always perfect: where humans operate, mistakes can be made. Even though technology sometimes performs equally well or better, handing over healthcare-related work to technology will not eliminate every problem. Bias can be introduced into health data while datasets are built and processed: data can lack diversity and contain missing values. People of Caucasian descent, for example, are a minority in the global population yet make up nearly 80% of the subjects in human-genome research. AI models trained on such data might amplify the bias and make unfavorable decisions toward particular groups of people characterized by age, gender, race, geography, or economic level. Such unconscious bias may harm clinical applicability and the quality of care. 
5.3.4 Gene Editing: Perfect Health
Imagine having the ability to choose hair color, hair type, eye color, nose shape, and personality traits. This probably brings to mind a character-creation screen in a video game played in the past. The ability to pick and choose one’s appearance is no longer confined to the digital world. Advances in gene editing have made it possible to alter the human genome and could enable humans to customize their embryos and babies. 

CRISPR is a powerful tool for editing genomes; it allows scientists to easily alter DNA sequences and modify gene function. First described in 1987, CRISPR has advanced significantly ever since. Nowadays, CRISPR is shorthand for CRISPR-Cas9, the fastest, cheapest, and most reliable system for editing genes. This groundbreaking innovation has already been used to treat disease in a small number of exceptional and/or life-threatening cases. Currently, many researchers are exploring gene editing in animals or isolated human cells, with the goal of using gene editing as a more widespread way to routinely treat genetic diseases in humans. Examples include inherited eye diseases, neurodegenerative conditions such as Alzheimer’s and Huntington’s diseases, and non-inherited diseases such as cancer and HIV. It seems rather inconceivable that diseases currently plaguing millions of people and their loved ones could be “simply turned off.” CRISPR could mean that, in the future, many genetic diseases become a thing of the past. This does not imply, however, that all humans will be perfectly healthy, since genes account for only approximately 30% of people’s health determinants. External factors such as climate change, other environmental changes, and new infectious diseases could still adversely affect people’s overall health. 

The potential positive impact of gene editing is indisputable. Nevertheless, as with any other technology, debates arise about the limits of what can and should be done. Should humans even be given the power to determine whether, and with which diseases, people should live? In the case of a life-threatening disease, many people would agree that eliminating it with gene editing has a net positive effect. In cases such as blindness and deafness, however, the answer is not as clear-cut. Is it acceptable to prevent a child from becoming deaf or blind, or to deliberately make them deaf or blind? To hearing and sighted people it may be inconceivable to live without these senses, so they might consider intervention acceptable. However, “many deaf people consider themselves to be part of a community with a strong identity.” This illustrates how the use of gene editing is a complicated moral consideration.

The biggest gene-editing controversy to date occurred in November 2018, when Dr. He Jiankui claimed to have produced the first human babies born with CRISPR-Cas-edited genomes. The doctor, who embarked on this project in secret, was jailed for three years and has since been given the nickname Doctor Frankenstein. As the possibility of genetically modifying embryos becomes increasingly real, what will be the long-term consequences of creating super babies? What are the consequences for future generations of making changes to embryos’ genes? Some believe it may have devastating consequences for humanity. Nowadays, there is an increasingly large movement of people seeking acceptance for who they are and refusing to conform to standards of perfection. Developments in gene editing might push in a different direction again. Is a world with only conventionally attractive and healthy people desirable? Is the randomness of what humans look like and what they are like not part of what makes them human? A threatening consequence of gene editing is that it could reduce human diversity. Social inequality is also likely to increase, since wealthy people can afford to genetically engineer their way through the world and poorer people cannot. Without coherent global legislation, gene-editing tourism, where people travel the world to design their babies, is not an unfathomable prospect. 

+ Elias Sohnle Moreno
Beyond the sociological benefits of diversity, genetic diversity protects humans against diseases and favors adaptability. Loss of genetic diversity could present an existential threat for the species in the long run in the event of future pandemics.

+ Elias Sohnle Moreno
There is another very interesting game-theoretic debate when it comes to gene editing. It is safe to say that European ethical safeguards will bridle the application of gene editing. But what if other nations engage in widespread adoption of gene editing, notably to enhance their populations? Would this prompt Western nations to do the same, at the expense of moral and ethical safeguards? Would nations accept the potential emergence of "genetically superior" humans in the long term? The disparities in ethical and moral standards across nations are very important in the case of gene editing.

Gene editing and biotechnology are not only applied in the healthcare sector; biotechnology can also be applied in the agriculture, aquaculture, and food sectors; the consumer products and services sector; and the materials, chemicals, and energy sectors, among others. The table titled ‘Examples of biotechnology applications and estimated time horizon of acceleration point’ provides a comprehensive overview of different biotechnology applications in different sectors, organized by future time horizon. 

+ Elias Sohnle Moreno
We've been voluntarily influencing genes for thousands of years via selective breeding of crops and livestock.

Table: Examples of biotechnology applications and estimated time horizon of acceleration point 
5.3.5 Saving Lives? 3D Printing Organs
In 2020, an average of twenty-one patients died every day in Europe while waiting for organ transplants. In the future, such deaths may become a thing of the past due to advances in 3D printing. There have been breakthroughs in 3D organ-printing technology, allowing scientists to print corneas, livers, and hearts, among other organs. 3D printing allows for the exact replication of the organ that needs replacing and circumvents the problem of organ rejection that can occur in traditional transplants. Currently, some scientists are conducting experiments in space in order to develop working organs, because tissue 3D-printed on Earth has a tendency to collapse under gravity. It is estimated that it could take another ten to fifteen years before fully functioning tissues and organs printed in space can be transplanted into humans. In an already aging and growing population, expanding people’s lifespans by years or even decades through organ transplantation will have far-reaching implications. With increasingly scarce resources on Earth, is it ethical to extend people’s lifespans even further? What will happen when organ transplants are not only used to save lives but to create customizable super humans?

+ Kim Tan
According to the National Foundation for Transplants, a standard kidney transplant can cost an estimated $300,000 or more, while a 3D bioprinter used to create 3D-printed organs costs only $10,000 to $200,000. Furthermore, scientists at Wake Forest are currently testing 3D machines that print skin directly onto a patient's body, a hopeful innovation for burn and trauma victims. 

The future of 3D healthcare technology does not just revive optimism for individuals waiting for an organ donor; it also reduces the opportunities for, and dangers of, organ trafficking. Could this also offer new hope for a safer world? 

Read more in these sources: link, link

5.3.6 Robots: Bridging the Healthcare-Worker-Gap
Having a nurse called Ava, Tommy, Yumi, Stevie, Moxi, or Grace is probably nothing out of the ordinary. But what if these are the names of robot nurses, which are increasingly being adopted in healthcare? While this may sound like the plot of a movie, it might be an inescapable reality that robots join the healthcare workforce. By 2030, it is estimated that there will be a global shortage of 18 million healthcare workers. This shortage, combined with advances in technology, might leave humanity with no other choice but to depend increasingly on robots in healthcare.

The use of robots in healthcare is not new. Currently, robotics is used in minimally invasive surgery, patient rehabilitation, and disinfection. In the short to medium term, robots are expected to be used increasingly for tasks that do not involve significant interaction with physicians, nurses, or patients, such as fetching and carrying materials and medications. In the long term, as software algorithms develop, robot-to-human interaction will become more likely. Previously, robots were thought unsuitable for caregiving because of their lack of human emotions and inferior intelligence. However, advances in technology are blurring the line between robots and humans. Could robots replace humans entirely in the healthcare sector? How would the people in need of care feel about this? Will people even realize that they are being treated by a robot, or will it become impossible to distinguish between humans and robots?
5.3.7 Singularity: Will Technological Growth Become Uncontrollable?
The advances described thus far, while at times futuristic, are generally still developments that most people can wrap their heads around. With the significant advances being made in technology, however, there is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. This is called the singularity. According to Ray Kurzweil, former director of engineering at Google, futurist, and one of the people credited with popularizing the singularity hypothesis, humans may then be fully replaced by AI or by some hybrid of humans and machines. In his book, Kurzweil writes, “The Singularity will allow us to transcend these limitations of our biological bodies and brains. There will be no distinction, post-singularity, between human and machine.” He further predicts that the singularity will take place by 2045, since this is the earliest point at which computerized intelligence could significantly exceed the sum total of human brainpower. Computing power will be so great that it will be impossible for ordinary humans (not augmented by technology) to keep up, and augmentation will be so common that the line between human and machine will be blurred.

This process is expected to go through different stages. In the post-immortal era, Kurzweil predicts that people will be able to extend their lives indefinitely with anti-aging medicines, replacement organs, stem-cell therapy, and medi machines, while neural uploading will lead to the first “immortals.” The ability to transcend biology and enhance oneself endlessly is also likely to lead to what is commonly referred to as the Transhuman Era. This refers to a process where humans will begin transitioning to a higher form of life by replacing or augmenting their physical bodies with synthetic parts. It is unclear to what extent this time horizon is accurate, but advances in technology make the singularity less like a science fiction movie and more like reality. Kurzweil also predicts a period of “Posthumanism.” In this era, humanity is no longer constrained by any physical or biological limitations and exists in various forms that allow it to explore the universe, live in simulated realities, or inhabit otherwise uninhabitable spaces where there is abundant energy to draw from.

+ Elias Sohnle Moreno
Some work is already being done, like the work Neuralink is doing on brain implants and brain-computer interfaces.
Source: link

It is unclear whether the singularity will happen, when it might happen, and whether it would unfold gradually or very suddenly. The late Stephen Hawking stated that when the singularity is upon humanity, the end of the human race might very well be upon us. While it is perhaps a dark thought, it is almost ironic that humanity has been destroying and wiping out biodiversity only to risk being wiped out by the very technology it created. Before this dark point is reached, humanity still has the potential to reverse the damage it has done to the Earth, and, hopefully, this will allow for a future where humans, biodiversity, and technology can coexist in harmony.

+ Elias Sohnle Moreno
Stephen Hawking expressed his concerns about humanity's ability to handle a technological singularity, seeing extinction as one of many possible scenarios. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all". Interestingly, in the same chapter, Stephen Hawking notes: "Intelligence is characterized as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage."

Figure: Singularity timeline 
5.4 Digital Ethics - Is There a Morally Right and Wrong in Technology?
This section on digital ethics examines how technology shapes and will continue to influence our political, social, and moral lives. From a different perspective, it also shows how organizations play a big part in securing millions of people’s personal data and how accountability from industry is essential in preventing damage from stolen data.

This topic emphasizes the ongoing challenges in cybersecurity and data privacy, and how today’s heightened reliance on technology creates a whole new medium through which cyberattacks can manifest. Emerging technologies shape the future and benefit humankind through automation, but what does it take for automation to overcome bias?

Digital ethics addresses data security trends, the alarming risks they present for organizations and individuals, and the internal ethical responsibility of companies to implement fairness in their technologies.
5.4.1 Protecting the Digital: Cyber Security
Before you have finished reading this paragraph, somebody in the world will already have had their data, computer, or mobile phone successfully hacked; every 39 seconds, a new cyberattack takes place on the web, and you or your organization could fall victim to one of them.

Malicious hacking attempts and cybercrime attacks jumped by 600% during the Covid-19 pandemic. Before the first full quarter of 2021, over 20.9 million records had already been breached by cybercriminals globally, which equates to roughly 677,270 records tampered with per day, or 28,219 records every hour; this figure was already close to the roughly 30 million records breached in all of 2019. The rise of technological advancements and emerging online markets in the modern world means that digital assets are growing rapidly. This underlines the heightened reliance on digital systems that enable an easier, faster, and more sophisticated flow of information inside an organization, or even within one’s personal day-to-day web usage. While these various new technological innovations are useful and make our digital presence a little more sophisticated, they also create a whole new avenue for cyberthreats and cybercriminals to pass through.
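For readers who want to check the rate arithmetic, the following minimal sketch reproduces it under the assumption that the 20.9 million records were breached over roughly one month; the window length is an assumption for illustration, not a figure from the cited report.

```python
# Rough sanity check of the breach-rate figures quoted above.
records_breached = 20_900_000
days_in_window = 31          # assumed reporting window of about one month

per_day = records_breached / days_in_window
per_hour = per_day / 24

print(f"~{per_day:,.0f} records per day")    # on the order of 670,000 per day
print(f"~{per_hour:,.0f} records per hour")  # on the order of 28,000 per hour
```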

The increasing amount of sensitive and confidential data from enterprise and personal use proportionally increases the cybersecurity measures we have to take. The 3.1-million-person gap in the cybersecurity workforce, however, is a reminder that an alarming amount of data is already at risk, and failing to close this gap quickly enough makes successful cyberattacks even more likely in the near future.

Passwords, home addresses, and phone numbers are not the only data cybercriminals are after; recent studies have found that cybercriminals are also targeting infrastructure such as hospitals, pipelines, meat-packing plants, and even aviation. The majority of these attack attempts remain unknown to the public, yet these newer, more advanced breach attempts can cause massive physical harm to millions, or even billions, of individuals. The healthcare industry is, surprisingly, the most cyber-attacked industry in the world, and the first reported death linked to ransomware occurred in September 2020, when a ransomware attack caused an IT failure at a hospital in Germany. In another incident, in Oldsmar, Florida, in February 2021, a hacker accessed the water supply with the intent to poison it but failed. The military is no stranger to cybercriminals either; advanced fighter jets such as the F-35, the so-called “flying computer” because of its incredibly advanced systems, are more likely to be brought down by a cyberattack than by an incoming missile. This is a fast-growing enemy we neither see nor feel, yet one that is becoming life-threatening at an alarming rate.

From IT professionals to elementary school children, every individual and every industry is susceptible to a cyberattack. Cybercrime damage costs are expected to reach $6 trillion in 2021, representing the greatest transfer of economic wealth in history, more than the global trade of all major illegal drugs combined. Paying for these damages means organizations have to divert money from investments in innovation, sustainability, and prevention toward dealing with risks. However, it is not too late for organizations and individuals to recognize the importance of cybersecurity and act proactively against the risk of stolen data. Companies will have to invest more in preventing cyberattacks, both monetarily and in their workforce, to increase their protection and enable long-term operations without the disruption and losses caused by hacking. Individuals will also have to realize the importance of cybersecurity, its growing threats, and its evolving forms, ranging from well-engineered phishing emails to unknown spyware in personal cameras.

Today, cybersecurity is not just something for industry professionals to be aware of; it is also a topic to be communicated to children, teenagers, and every citizen who has a first name.
5.4.2 Data Privacy in the Digital Era
Do you own a smartphone? Have you ever liked a post or a hilarious meme on your social media account, or do you have an email account? None of those, but you have had a regular check-up in a healthcare facility or visited a local theme park? These are just a few of the situations in which a data broker can actively collect, buy, and sell your personal data without your consent.

In 1967, Alan Westin defined privacy in a way that has shaped privacy thinking around the world; he defined it as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.” In other words, privacy exists when individuals control their own personal data and have the final say over what is shared with others and how.

According to a 2017 report by The New York Times, a company most people have never heard of engages in 50 trillion personal-data transactions a year, meaning it buys and sells personal data without any acknowledgment, awareness, or, most importantly, consent.

In a world where data is growing exponentially and becoming both a backbone and a lifeline for many industries and individuals, knowing whether and how our personal information is disseminated and secured is just the tip of the iceberg. Everyone who has ever created a social media account, visited a website, or downloaded a software update has been given a chance to read and agree to a mandatory privacy policy, more commonly known as the Terms and Conditions. However, reading the privacy policies of the forty most popular websites globally would take thirty working days from end to end. If there are laws, regulations, and procedures to give individuals a choice to consent, then what exactly happens when an individual disagrees? Whether an individual signs a consent form before a car service takes place or ticks the checkbox confirming they have read the Terms and Conditions before downloading a required software update, anyone who does not consent is refused those necessary services.

In 2009, the Federal Trade Commission (FTC) said, “We all agree that consumers do not read privacy policies.” The FTC is known as the largest privacy regulator; it has arguably done more than any other body to regulate companies’ privacy policies. Our current ability to consent is illusory: individuals are left with no choice but to agree, or else be denied access to what they need and want. And once individuals do consent, companies gain the legal ability to shift liability onto them. Disguised as a right to consent, it is in reality a hidden burden for when something goes wrong, and consumers are left out in the cold when it does.

Now, the question shifts to the relevance and privilege of consent—does having the ability to consent even matter, and is everyone even given the same chance to disagree?

From mobile gaming applications to major technological advancements, consent plays a big role in facilitating these developments. In the future, consent must genuinely benefit the individual, not just companies. People should have the liberty to disagree and still access the same information, services, and rights, without having to compromise their ability to choose. Organizations must become more conscious of the value of their customers’ personal data and ensure that their data privacy terms are produced ethically and do not take advantage of customers, or anyone else, by any means.

Consent should be meaningful, effective, and communicated between both parties. Most importantly, consent should be requested in simple language that everyone can understand and that is easy for all to read. Agree or disagree?
5.4.3 Bias in Technology

+ Nadine Kanbier
Some interesting recommendations on this topic worth looking into are: 'Weapons of Math Destruction' by Cathy O'Neil (well known on the subject), 'Coded Bias' (a Netflix documentary featuring Joy Buolamwini), and the Algorithmic Justice League (they are doing a ton of work on the subject, combining it with art as well).

The first-ever beauty contest judged by AI was held in 2016. The robot jury evaluated about 6,000 applicants, and only one of the forty-four winners had dark skin.

ML is a subset of AI that teaches computers to operate on their own; their progressive nature of learning from, adapting to, and mirroring the data passed to them shapes the outputs they construct. An AI system is simply a set of algorithms that develops further over time from the data it digests, and when that data is produced and acquired from humans, it carries all the biases that we hold, including biases around race, gender, and much more.

+ Camera Ford
Safiya Noble, a digital media scholar and Professor of Gender Studies and African American Studies at the University of California Los Angeles (UCLA), wrote a book called “Algorithms of Oppression” about exactly this. She studies how the internet and digital technologies reproduce racial and gender power dynamics. The book shows that search engines are not objective sources of information. Instead, search engine results actually reflect (and are heavily influenced by) social values and economic incentives (for example, advertising) determined by the dominant power structure, leading to the spreading of stereotypes, propaganda, and even political extremism such as white nationalism.
Source: link

Discrimination stems from the data we use and the inputs we allow computers to consider. Humans decide which attributes the computer may use and which are ignored during the preparation stage of modeling an AI system. For example, to model a person’s “creditworthiness,” the attributes could include a customer’s age, income, or number of paid-off loans. While such modeling can significantly improve the accuracy of automation in these areas, the biases that influence it remain unjustified, as the sketch below illustrates.
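To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of how historical bias in training data carries over into an automated decision. The dataset, feature names, and numbers are invented for illustration and do not describe any real credit-scoring system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

income = rng.normal(50, 15, n)   # income in thousands (hypothetical)
group = rng.integers(0, 2, n)    # a protected attribute, e.g. a 0/1 gender proxy

# Simulated historical decisions: group 1 was rejected half the time even when
# income was sufficient, encoding past discrimination into the training labels.
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Same income, different group: the trained model reproduces the historical bias.
print(model.predict_proba([[50, 0]])[0, 1])  # higher approval probability
print(model.predict_proba([[50, 1]])[0, 1])  # lower approval probability
```

Note that simply dropping the group column would not necessarily fix the problem, since other, correlated features can act as proxies for it.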

Amazon attempted to design an AI tool that would eliminate the lengthy recruitment process and make candidate selection quicker and more efficient, using the resumé data it had collected over a decade. However, the majority of the collected resumés came from men, and the AI tool eventually learned this bias: Amazon found its recruiting system was dismissing female candidates, even penalizing applicants who had attended women’s colleges. In the end, the tool was never used, for several reasons. Facebook, another major tech company, was sued for excluding elderly and female consumers from advertisements for financial services. Facial recognition technologies have likewise ignited controversy for misidentifying women and people of color. Police departments consider facial recognition systems a major resource for identifying criminals; however, some face-analysis algorithms have been shown to misidentify people of color, and innocent people can be robbed of their freedom for crimes they did not commit. Deep-learning algorithms greatly impact people’s lives, and the biases hidden in these systems can sustain injustice in retail, hiring, healthcare, security, and even the criminal justice system. These algorithms are not discriminatory in themselves, but they inherit the biases of previous decision-makers and replicate the effects of that discrimination.

The mitigation of algorithmic biases does not happen overnight; existing systems make it difficult to retroactively identify when and where biases were introduced. Major tech companies are themselves still working toward unbiased AI systems. However, if the prejudices in today’s technology persist unchanged into future technological advancements, they will continue to promote inequality.

Technology shapes the future, and the information we have today will shape what future generations inherit in AI algorithms. Incorporating the views of a more open and inclusive modern society can help construct more meaningful AI systems that gradually override the biases of past algorithms. Although there is no exact timeline for this process, a future in which AI technology replaces old biases is possible and positive, provided it is built on ethical and fair foundations.
5.5 Technological Change: What’s Heading Our Way?
The common thread throughout all the advances in technology is that many parts of people’s lives will become partly, if not fully, digitized. From incorporating AI into daily tasks, to personalizing medical care, to developing a more user-centered Internet, technology is evolving in myriad ways. Each of these developments has the potential to make a positive impact on the lives of people or on society as a whole. IoT technology allows people to weave their way through cities and their homes more efficiently, for example. The increased use of robots allows for greater efficiency and can alleviate some future labor shortages. Blockchain can significantly help society by improving trust, transparency, and efficiency while facilitating innovation. Technology is also revolutionizing the healthcare industry. Virtual healthcare has the potential to make healthcare more accessible. With personalized healthcare, people can gain more accurate and up-to-date insights into their overall health, allowing for more opportunities to prevent rather than treat disease. Through gene editing, it might even be possible to eradicate some of the diseases currently dominating the world. However, the nature of technology is such that it can also cause more harm than good if it is not designed and implemented carefully and thoughtfully. This reality is reflected in growing demands for a more open and transparent digital world, more security for private data, and more thoughtful design that centers human needs and minimizes bias.

+ Lara Hemels
Noema Magazine, specifically its articles on philosophy and technology (particularly AI), is an additional source to look into. The question of what exactly distinguishes humanity from technology will become increasingly relevant as AI advances and integrates further into our lives.

+ Elias Sohnle Moreno
Increasingly, the term "human-centered" is replacing "user-centered" when it comes to product development, highlighting a shift in how companies view the consumer. In Web 3.0, every user is also a creator with unique needs and wants, expecting a highly tailored customer experience.


