The Internet has become part of everyday life. It now connects us not only through desktops but also through mobile devices, services, public Wi-Fi, and countless programs and applications, from social networking to photo archives. People use the network to buy goods and services, make payments, contact the authorities, and meet other essential needs. Beyond being a means of communication, the Internet is also a powerful political weapon that can be used for good as well as for ill. The wave of protests and coups in the Middle East known as the “Arab Spring” was directly linked to the impact of these technologies, collectively known as Web 2.0 or Internet diplomacy. In a broader context, such processes can be called cyber-geopolitics, since they have a planetary character and carry a potential for conflict.
Edward Snowden’s revelations significantly affected the landscape of international relations and security in cyberspace, and many states reacted immediately. Right after the disclosures, the US tried to create an “information overload”, accusing Beijing of cyber espionage. The Snowden case, in turn, served as a reminder of the “Five Eyes”, the intelligence alliance under Washington’s leadership, which carries out surveillance on all citizens who use the Internet or any mobile technology.
Looking at the main trends, a major shift is under way in Internet policy along which states are dividing into two camps. The first is the global West, which insists that the Internet be universal (but under the control of the US as the inventor of the World Wide Web). The opposing group of countries will try to defend their sovereignty, including in the Internet space.
This opposition is already on the global political agenda. In recent years, a number of countries have seen considerable debate and legislative change directly related to cyberspace.
These processes are complex phenomena, often counter-intuitive for traditional political science and classical geopolitics. The problem is that some cyberspace issues remain the preserve of highly specialized experts. An adequate understanding of them requires a multidisciplinary approach: lawyers alone cannot understand cyberspace in detail without the help of engineers and programmers, while policymakers must grasp not only consumers’ interest in new opportunities but also the technical and economic aspects of cyberspace. It is therefore necessary to attend not only to the political and economic aspects but also to analyze the ideological, social, and military levels, i.e. the elements of the geopolitical structure of any state or alliance.
Ideology and cyberspace
Philosophical concepts stand behind every political project or theory, and in the case of the Internet there are a number of ideas that have influenced the creation and development of the network.
The Dutch researcher Paul Treanor argues that the core network model originated in classical liberalism; in a sense, the Internet is an electronic free market. The resulting ideology, which he calls Net-ism, rests on aggressive political lobbying for the Internet. Among such lobbyists Treanor counts the Electronic Frontier Foundation and the group of Martin Bangemann, which formulated the information strategy for the European Council. Treanor considers “Cyberspace and the American Dream: A Magna Carta for the Knowledge Age” by the futurist Alvin Toffler and “People and Society in Cyberspace” by George Keyworth to be the founding documents of cyber-liberal ideology.
In a 1996 article, he wrote that the Internet itself was a mechanism of discrimination: it was available only to the 2% of the world’s population who had enough money, access to terminals, the ability to use a computer, and knowledge of English (at least of the Roman script). “Net-ism is wrong because it is coercively expansionist. There is no inherent or inevitable technical or historical trend to a single communication network. On the contrary: never before in history, have so many separate networks been technically possible. Linking all networks together is a conscious choice by some people, a choice then imposed on others. The logic is identical to that of colonial governments, which forced peasants into the agricultural market, by imposing cash taxes. (To pay the tax, the peasants had to sell cash crops such as sugar). This logic says in effect: ‘no one is free to stay outside the free market’. Today, not just governments, but business, social movements, intellectuals and artists, all want to impose the Net. This broad movement is obviously more than profit-seeking (and a non-profit Net would also be wrong). It is an ideological movement seeking ideological imposition. That imposition itself, the universalism, the expansionism, their involuntary nature, the basic unfreedom to exit – that is what makes liberal structures wrong. That applies to the free market, and it applies inherently to the Internet”.
Liberals treat ideas and opinions as objects of exchange. If a liberal has an opinion, he or she wants to “express it” and share it with others. The priority of dialogue and communication in neoliberal theories (for example, communicative ethics) runs parallel to the priority of market exchange in classical liberalism: in this sense, communicative ethics and the ethics of dialogue had already established the political and ethical framework for cyberspace.
An information society is a liberal society of cyber-exchange. Every citizen reports, receives, and transmits flows of ideas and opinions, a kind of Nick Leeson of communication. Of course, only the Internet (or something like it) could make this possible, but that does not make the information society morally or politically right.
In general, the West could be said to have two ideological trends which in their own way interpret cyberspace, its function and operation methods. These are cyber-liberalism and cyber-realism.
Over the past thirty years, both narratives have coexisted in tense relationship, each gaining approval at particular moments as a result of historical events such as the Gulf War of 1991 or the terrorist attacks of September 11, 2001. Technical web developers and their fellow scientists tell a neoliberal story, while the policy community and military strategists hold the position of neo-realism. Both versions recognize cyberspace as a new type of territory with unique challenges and benefits for its participants. However, the two schools differ in three main respects:
– in their understanding of the structure of agency and how an agent relates to cyberspace;
– in their assessment of the prospects for regulatory action in cyberspace;
– in whether they see cyberspace as ungovernable or simply as unclaimed territory.
Moreover, both schools consider cyberspace a territory of anarchy, but they interpret the meaning of that anarchy in different ways.
Over time, both ideologies have become more fleshed out, offering clear (and different) views on territoriality, nationality, and the role of information in the category of cyber-power. The realist position in particular grew more consistent following the first Gulf War in 1991 and the events of September 11. In addition, as the economic functions of cyberspace have changed, the concepts of territoriality and nationality have also been refined.
From DIY to techno-realism
Francisco Millarch proposed the following chronology for the ideas associated with online space and related technologies:
1. DIY culture and pure nerdism (1976 – 1984)
At this early stage, personal computing was a hobby for most. Enthusiasts assembled their own machines, programmed their own code, and exchanged their experiences with their peers at homebrew computer clubs. This was also when the first companies in the PC industry started, such as Altair, Apple and Microsoft, but with products focused on the niche market of nerds and techies.
2. Real-life applications and machines (1984 – 1990)
With the launch of the Apple Mac in 1984, non-techies found their way to the benefits of information technology. Graphical user interfaces (GUIs) and applications such as word processors and spreadsheets initiated a shift from an exclusively nerdy culture to a results-oriented use of the personal computer. Even the text-based, harder-to-use IBM PC platform found its adherents in the office marketplace.
3. Windows embracing the “rest of us” (1990 – 1993)
In 1990 the first workable version of Microsoft Windows was released, emulating the success of the Mac GUI of six years earlier. Although Apple had coined the “computer for the rest of us” motto in 1984, it was Microsoft that profited the most. Through a series of strategic mistakes (proprietary technology, no licensing agreements, higher prices than the competition, and a strict policy of bundling hardware and operating system), Apple lost the enormous PC market share it had enjoyed in the early days of the Apple II. It was Microsoft, with its strategy of “embracing and extending”, that actually won over “the rest of us” at a time when hardware prices dropped to levels affordable for most middle-class households in developed countries. This was the foundation of its 94.1% share of the graphical OS market and its 85% domination of the office applications industry.
Nonetheless, those were the golden years in which the personal computing industry formed a critical mass of users. Together with the convergence of the telecommunications and media industries, this epoch built the foundations of techno-utopian ideals.
4. Net Utopia and cyber-liberalism: the Wired era (1993 – 1998)
Then, suddenly, by giving the public Internet backbone, the product of over 30 years of investment of US taxpayer funds, away to the private sector, the American government turned the academic and military network into the “information marketplace”, the new business frontier for any post-industrial CEO.
5. Technorealism (1998 – )
This movement was created by a small group of intellectuals led by Andrew Shapiro, David Shenk, and Steven Johnson. They published a manifesto consisting of eight points.
1. Technologies are not neutral.
A great misconception of our time is the idea that technologies are completely free of bias – that because they are inanimate artifacts, they don’t promote certain kinds of behaviors over others. In truth, technologies come loaded with both intended and unintended social, political, and economic leanings. Every tool provides its users with a particular manner of seeing the world and specific ways of interacting with others. It is important for each of us to consider the biases of various technologies and to seek out those that reflect our values and aspirations.
2. The Internet is revolutionary, but not Utopian.
The Net is an extraordinary communications tool that provides a range of new opportunities for people, communities, businesses, and government. Yet as cyberspace becomes more populated, it increasingly resembles society at large, in all its complexity. For every empowering or enlightening aspect of the wired life, there will also be dimensions that are malicious, perverse, or rather ordinary.
3. Government has an important role to play on the electronic frontier.
Contrary to some claims, cyberspace is not formally a place or jurisdiction separate from Earth. While governments should respect the rules and customs that have arisen in cyberspace, and should not stifle this new world with inefficient regulation or censorship, it is foolish to say that the public has no sovereignty over what an errant citizen or fraudulent corporation does online. As the representative of the people and the guardian of democratic values, the state has the right and responsibility to help integrate cyberspace and conventional society.
Technology standards and privacy issues, for example, are too important to be entrusted to the marketplace alone. Competing software firms have little interest in preserving the open standards that are essential to a fully functioning interactive network. Markets encourage innovation, but they do not necessarily insure the public interest.
4. Information is not knowledge.
All around us, information is moving faster and becoming cheaper to acquire, and the benefits are manifest. That said, the proliferation of data is also a serious challenge, requiring new measures of human discipline and skepticism. We must not confuse the thrill of acquiring or distributing information quickly with the more daunting task of converting it into knowledge and wisdom. Regardless of how advanced our computers become, we should never use them as a substitute for our own basic cognitive skills of awareness, perception, reasoning, and judgment.
5. Wiring the schools will not save them.
The problems with America’s public schools — disparate funding, social promotion, bloated class size, crumbling infrastructure, lack of standards — have almost nothing to do with technology. Consequently, no amount of technology will lead to the educational revolution prophesied by President Clinton and others. The art of teaching cannot be replicated by computers, the Net, or by “distance learning.” These tools can, of course, augment an already high-quality educational experience. But to rely on them as any sort of panacea would be a costly mistake.
6. Information wants to be protected.
It’s true that cyberspace and other recent developments are challenging our copyright laws and frameworks for protecting intellectual property. The answer, though, is not to scrap existing statutes and principles. Instead, we must update old laws and interpretations so that information receives roughly the same protection it did in the context of old media. The goal is the same: to give authors sufficient control over their work so that they have an incentive to create, while maintaining the right of the public to make fair use of that information. In neither context does information want “to be free.” Rather, it needs to be protected.
7. The public owns the airwaves; the public should benefit from their use.
The recent digital spectrum giveaway to broadcasters underscores the corrupt and inefficient misuse of public resources in the arena of technology. The citizenry should benefit and profit from the use of public frequencies, and should retain a portion of the spectrum for educational, cultural, and public access uses. We should demand more for private use of public property.
8. Understanding technology should be an essential component of global citizenship.
In a world driven by the flow of information, the interfaces – and the underlying code – that make information visible are becoming enormously powerful social forces. Understanding their strengths and limitations, and even participating in the creation of better tools, should be an important part of being an involved citizen. These tools affect our lives as much as laws do, and we should subject them to a similar democratic scrutiny.
Another ideological current is connectivism. The Canadian scholar David T. Jones, examining the theory of “connectivism”, set out how network and knowledge relate to each other. To this end, he identifies three major types of knowledge:
– qualitative – i.e., knowledge of properties, relations, and other typically sensible features of entities
– quantitative – i.e., knowledge of number, area, mass, and other features derived by means of discernment or division of entities within sensory perception
– connective – i.e., knowledge of patterns, systems, ecologies, and other features that arise from the recognition of interactions of these entities with each other.
There is an increasing effect of context-sensitivity across these three types of knowledge. Sensory information is in the first instance context-independent as raw sense data, but as we begin to discern and name properties, context-sensitivity increases. As we begin to discern entities in order to count them, context-sensitivity increases further. Connective knowledge is the most context-sensitive of all, as it arises only after the perceiver has learned to detect patterns in the input data.
David T. Jones notes that the state we call ‘knowledge’ is produced in (complex) entities as a consequence of the connections between and interactions among the parts of that entity.
One proposition of connectivism (call it ‘strong connectivism’) is that ‘knowledge’ consists in connections created solely by the common connection-forming mechanisms, and not by the particular physical constitution of the system involved. Weak connectivism, by contrast, allows that the physical properties of the entities create connections, and hence knowledge, unique to those entities. Most proponents endorse elements of both strong and weak connectivism.
More generally, this distinction can be characterized under the heading of ‘groups’ and ‘networks’. In this line of argument, groups are defined compositionally (sameness of purpose, sameness of type of entity, and so on), while networks are defined in terms of interactions. The distinction between groups and networks leads to four major methodological principles:
– autonomy – each entity in a network governs itself
– diversity – entities in a network can have distinct, unique states
– openness – membership in the network is fluid; the network receives external input
– interactivity – ‘knowledge’ in the network is derived through a process of interactivity, rather than through a process of propagating the properties of one entity to other entities.
A question arises here: do these processes create reliable networks? Network reliability is possible only when there is a mechanism to prevent ‘network death’. Network death occurs when all entities are in the same state, and hence all interaction between them has either stopped or become static. Network death is the typical result of so-called ‘cascade phenomena’, in which a process of spreading activation eliminates diversity in the network. The four principles above are mechanisms that govern or regulate spreading activation.
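The cascade dynamic described above can be illustrated with a minimal toy simulation. This is only an informal sketch, not part of the connectivist literature: it assumes a fully connected network of entities whose states are letters, a deliberately crude “spreading activation” rule in which every entity imitates the most common state, and an “openness” step that injects a fresh state from outside. The function names (`spread`, `diversity`) are invented for illustration.

```python
from collections import Counter

def spread(states):
    """One round of spreading activation on a fully connected network:
    every entity imitates the most common state (ties broken alphabetically).
    This is a hypothetical, maximally simplified cascade rule."""
    winner = min(Counter(states).most_common(), key=lambda kv: (-kv[1], kv[0]))[0]
    return [winner] * len(states)

def diversity(states):
    """Count distinct states; a diversity of 1 corresponds to 'network death'."""
    return len(set(states))

states = ["a", "a", "b", "b", "b", "c"]
print(diversity(states))           # 3 distinct states to begin with
states = spread(states)
print(states, diversity(states))   # cascade: all entities now hold "b" (diversity 1)

# The 'openness' principle counteracts this: external input re-introduces
# a new state, so the network does not freeze into a single static state.
states[0] = "x"                    # fresh input from outside the network
print(diversity(states))           # diversity restored to 2
```

With no regulating mechanism, a single round of imitation collapses all diversity, which is exactly the “network death” outcome; the injected external state shows, in miniature, why openness is listed among the principles that keep a network alive.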
George Siemens considers connectivism to be the integration of principles explored by chaos, network, and complexity and self-organization theories. Learning is a process that occurs within nebulous environments of shifting core elements – not entirely under the control of the individual. Learning (defined as actionable knowledge) can reside outside of ourselves (within an organization or a database), is focused on connecting specialized information sets, and the connections that enable us to learn more are more important than our current state of knowing.
Connectivism is driven by the understanding that decisions are based on rapidly altering foundations. New information is continually being acquired. The ability to draw distinctions between important and unimportant information is vital. The ability to recognize when new information alters the landscape based on decisions made yesterday is also critical.
Principles of connectivism:
Learning and knowledge rests in diversity of opinions.
Learning is a process of connecting specialized nodes or information sources.
Learning may reside in non-human appliances.
Capacity to know more is more critical than what is currently known.
Nurturing and maintaining connections is needed to facilitate continual learning.
Ability to see connections between fields, ideas, and concepts is a core skill.
Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.
Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.
Connectivism, in fact, is a flexible, adaptive framework through which a wide variety of network theories can be realized. But this became possible only in the era of global Internet connectivity.
And what comes next? As a rule, Western researchers continue to develop the topics of cyberspace, cyber-liberalism, and cyber-realism while trying to ignore another important event: the bursting of the dotcom bubble in 2001. The bubble had been inflating since 1995. Traditionally, the story is told as follows. It formed through the take-off of the shares of Internet companies (mostly American), the appearance of a large number of new Internet firms, and the reorientation of older companies toward Internet business at the end of the 20th century. Shares of companies promising to generate income over the Internet soared fabulously. Many commentators and economists justified these high prices by claiming that a “new economy” had arrived. In fact, the new business models were ineffective, and the money spent mainly on advertising and large loans led to a wave of bankruptcies, a steep fall of the NASDAQ index, and a collapse in the prices of server computers.
But what lay behind this idea of the free market and liberalism was deliberately glossed over. Western experts prefer to speak of cyberspace in connection with new forms of conflict, changes in forms of sovereignty, breakthrough technologies, and so forth, as if a new bubble could never happen again. The proliferation of mobile devices and gadgets of every kind suggests the contrary. Where is the guarantee that the hand of the free market will not once again plunge developed countries into a new financial, economic, and political deadlock? In other words, although cyber-liberals continue to actively defend their ideas and argue for the advantages of their ideology, there are clear signs that their strategy is untenable and can lead to disaster at the national and even international level if it is adopted and adapted as state policy.
Paul Starr, a professor of sociology and public affairs at Princeton University and the co-editor (with Robert Kuttner) and co-founder (with Robert Kuttner and Robert Reich) of the liberal magazine The American Prospect, is considered one of the ideologues of cyberspace. In his article “Of Our Time: Cyberpower and Freedom”, which entered the annals of cyber-liberalism, Starr wrote: “Cyberspace is most singularly the product of political invention and social agreement, and only law will give us the security to use it freely.”
In the previously cited article “Cyberspace and the American Dream: A Magna Carta for the Knowledge Age”, Esther Dyson, George Gilder, George Keyworth, and Alvin Toffler wrote that “the Third Wave, and the Knowledge Age it has opened, will not deliver on its potential unless it adds social and political dominance to its accelerating technological and economic strength. This means repealing Second Wave laws and retiring Second Wave attitudes. It also gives to leaders of the advanced democracies a special responsibility – to facilitate, hasten, and explain the transition.
As humankind explores this new “electronic frontier” of knowledge, it must confront again the most profound questions of how to organize itself for the common good. The meaning of freedom, structures of self-government, definition of property, nature of competition, conditions for cooperation, sense of community and nature of progress will each be redefined for the Knowledge Age – just as they were redefined for a new age of industry some 250 years ago.”
At the end of this doctrinal work of cyber-liberalism, Toffler, Keyworth, and their colleagues reveal the true purpose of their intentions: “There are key themes on which this constituency-to-come can agree. To start with, liberation – from Second Wave rules, regulations, taxes and laws laid in place to serve the smokestack barons and bureaucrats of the past. Next, of course, must come the creation – creation of a new civilization, founded in the eternal truths of the American Idea.”
The ideologists of this new current associated with emerging cyberspace build on their liberal predecessors. Quotations from libertarian thinkers such as Ayn Rand can often be found in their works, and mentions of “the Frontier” take us back to the era of the doctrine of Manifest Destiny, when US intellectuals justified their historic mission by Divine Providence.
Thus emerged the trajectory of cyber-libertarianism, which declares online space a new world in which individual freedom, the spirit of enterprise, and free creativity must prevail. But its foundations were laid by American dominance.