Homework

stefan.kappl.uni-linz, 22. März 2012, 15:18

Here are the 7 terms that were previously unknown to me, together with the videos in which they came up:

  • Gatekeeper - Legitimationsdruck auf das Urheberrecht
  • Intelligent networks (smart grids) - Revolutionäre Netze durch kollektive Bewegungen
  • Intermediary - Das Web der Zukunft
  • Memex - About Paperless Office
  • Broadcasting - Larry Lessig: How Creativity is being strangled by the law
  • Network Neutrality - Tim Berners-Lee
  • Address depletion - What is IPv6

Gatekeeper

Nearly 60 countries around the world censor Internet communications in some form, but Egypt’s recent complete shutdown of Internet communications was unprecedented.

Should free and open communication—particularly free and open communication via the Internet—be considered an unalienable right? How much control should a government or Internet service provider wield over its citizens’ communications?

This is very much a global issue and, while it’s easy to say that every citizen should have “uncensored access” to the Internet, such a statement is too glib, and here’s why.

If we have learned anything in Internet security from the past 10 years, it’s that a completely open Internet can make it as difficult to communicate safely and effectively as a closed one. The past decade witnessed a meteoric rise of unwanted traffic in the form of spam and cybercrime, made possible through cheap and easy Internet connections. Should spammers engaged in mass-marketing (as well as other more nefarious activities) be able to communicate as freely and easily as Egyptian protestors? Where do we draw that line?

Second, while censorship is prominent in countries like Egypt and China, Americans face more subtle—but equally serious—concerns about the quality of our network access, with issues ranging from network neutrality to competition in access networks. Our government’s decisions affect our Internet access quality and speed. Six years ago the Supreme Court decided Internet service providers (ISPs) were under no obligation to lease their infrastructure to competing carriers. This has effectively created a near-monopoly for Internet access in many regions of the United States and left users either unable to exchange certain types of traffic (such as when Comcast blocked BitTorrent) or with flagging Internet speeds (such as when AT&T delayed its rollout of fiber to the home as part of its U-Verse offering).

Finally, even if citizens can access the Internet, they must also be able to verify information sources. It’s not just whether Facebook, Twitter or YouTube is blocked—it’s whether governments or other organizations are using such sites to spread propaganda.

All of these issues, both at home and abroad, revolve around one question: Who should be the Internet gatekeeper, and what rules should be applied at the gate? I believe the foundations of rights in the digital world rest on two pillars: transparency and choice. First, the actions of ISPs and governments should be transparent; if they take certain actions to restrict, throttle or otherwise manipulate communications or information, users must know about it. Second, users must be able to choose their ISP. If they do not like the performance or policies of a particular ISP, they should have the ability to switch providers.

Transparency is thornier than it appears. Because ISPs do not publicize the way they prioritize different kinds of traffic, we must reverse-engineer these practices with measurement tools. Even notions such as “Internet speed” are complicated and can’t be represented by a single number. Also, different users may be concerned with different performance metrics; gamers might be interested in network service that delivers traffic with the least amount of delay, while those who stream movies may care more about receiving a high quality signal with few errors.
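To make concrete why "Internet speed" cannot be one number, here is a minimal sketch in Python. It is not the measurement tooling referred to in this article; the test host and download URL are placeholders, and a real tool would measure far more carefully. It simply reports delay (what a gamer cares about) and bulk throughput (what a movie streamer cares about) as two separate figures.

    # Minimal sketch: report latency and throughput separately instead of one "speed".
    # example.com and the test URL are placeholders, not real measurement endpoints.
    import socket
    import time
    import urllib.request

    def tcp_connect_latency_ms(host, port=80, samples=5):
        """Median TCP connect time -- a rough proxy for interactive delay."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            times.append((time.perf_counter() - start) * 1000)
        times.sort()
        return times[len(times) // 2]

    def download_throughput_mbps(url):
        """Bulk download rate -- closer to what streaming quality depends on."""
        start = time.perf_counter()
        data = urllib.request.urlopen(url, timeout=30).read()
        elapsed = time.perf_counter() - start
        return (len(data) * 8) / (elapsed * 1_000_000)

    print("latency:   ", round(tcp_connect_latency_ms("example.com"), 1), "ms")
    print("throughput:", round(download_throughput_mbps("http://example.com/testfile.bin"), 2), "Mbit/s")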

At Georgia Tech we are working with the FCC to give consumers a better sense of whether they’re getting what they are paying for, in terms of ISP performance, and also to educate them on how they might coax better performance out of their home networks.

But in the end, transparency is only helpful if users can choose among Internet service providers. Unfortunately in the United States, users have very little choice. We must reconsider ways to make the ISP market more competitive, perhaps drawing on our own experiences in forcing competition among utility providers.

Though the events in Egypt seem far away, the central questions about information access are quite relevant here at home. Demanding that ISP policies and behaviors be transparent—and providing users more choice in the ISPs they can use—helps ensure that everyone’s Internet is less vulnerable to the whims of a single gatekeeper.

Nick Feamster is an assistant professor in the School of Computer Science at Georgia Tech. His research focuses on many aspects of computer networking and networked systems, including the design, measurement, and analysis of network routing protocols, network operations and security, and anonymous communication systems. In 2010 he was recognized by Technology Review magazine as one of the world’s top innovators under the age of 35 for his research in computer networks, and he also received a Rising Star Award from the Association for Computing Machinery. Feamster is featured in the March 2011 issue of Discover magazine in a multi-page exploration of tomorrow’s Internet.(http://allthingsd.com/20110211/the-internets-gatekeepers/ 22.03.2012)

 

Intelligent Networks (Smart Grids)

Smart grids are intelligent networks because they incorporate innovative information and communication technologies. In the high-voltage transmission grids this is already standard practice at E.ON: their management is automated, and the remote control of large power plants has become routine. Alongside the electricity itself, large volumes of data needed for control are transmitted and processed. The task now is to make these concepts usable for the medium- and low-voltage grids (distribution grids) as well, to supplement them with new elements, and then to combine all grid levels systematically with one another.

E.ON is taking on this task. In more than 110 individual projects the group is already investigating many aspects of deploying intelligent grid technology. The focus is on insights into load flow and its dependence on wind, sun, consumer behavior and electricity storage (for example electric cars), on integration into the existing system landscape (grid control systems), and on identifying suitable components for the communication technology in transformer stations, substations and grid control centers. (http://www.eon.com/de/businessareas/35334.jsp 22.03.2012)
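As a purely illustrative sketch of the kind of quantity such distribution-grid monitoring has to compute (this is not E.ON's control technology, and all figures below are invented), the net load at a transformer station can be expressed as consumption plus storage charging minus local wind and solar feed-in:

    # Illustrative sketch only: net load at a distribution transformer as the
    # balance of consumption, local wind/solar feed-in and e-car charging.
    # The transformer rating and all readings are made-up numbers.
    from dataclasses import dataclass

    @dataclass
    class StationReading:
        consumption_kw: float   # household/commercial demand
        solar_kw: float         # local PV feed-in
        wind_kw: float          # local wind feed-in
        ev_charging_kw: float   # storage / e-car charging load

        def net_load_kw(self) -> float:
            """Positive = power drawn from the medium-voltage grid, negative = fed back."""
            return self.consumption_kw + self.ev_charging_kw - self.solar_kw - self.wind_kw

    def check_station(reading: StationReading, rating_kw: float = 400.0) -> str:
        load = reading.net_load_kw()
        if abs(load) > rating_kw:
            return f"ALARM: {load:+.0f} kW exceeds the {rating_kw:.0f} kW transformer rating"
        return f"OK: {load:+.0f} kW within limits"

    print(check_station(StationReading(consumption_kw=310, solar_kw=180, wind_kw=40, ev_charging_kw=220)))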

 

Intermediary

The Internet does not only make classical intermediaries and trading stages disappear. Alongside this, the emergence of new or novel kinds of intermediaries can also be observed,

  • when the search and comparison costs for finding and selecting suitable products and services are higher than the cost of using an intermediary, or
  • when certain services are offered that an individual user could not provide on his own.

Concrete examples are:

  • search engines such as Google, which make it easier for the user to find particular information
  • portals such as AltaVista, which likewise make it easier to find particular information
  • shopping malls such as My World, which make searching and comparing easier for the user by bringing together various online shops and offers
  • electronic marketplaces such as DCI, which provide an electronic platform where supply and demand meet and thereby help the customer save search, comparison and transaction costs
  • price agencies such as Preis-Ass, which help the user find and select the cheapest product (see the small sketch after this list)
  • auctions such as eBay, which help the user obtain the desired product more cheaply
  • virtual communities such as the culture community Metropolis, which provide interested users with a platform for communication and information exchange. (http://www.teialehrbuch.de/Kostenlose-Kurse/eBusiness/12287-Entstehung-neuer-Intermediaere.html 22.03.2012)
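The sketch promised above: a toy price-comparison intermediary in Python, in the spirit of the price-agency example, together with the cost rule from the first pair of bullet points (use the intermediary when searching yourself would cost more than its fee). It is not any real service's API; shop names, prices and fees are invented.

    # Toy price-comparison intermediary: aggregate offers, pick the cheapest,
    # and apply the "search cost vs. intermediary fee" rule from the text.
    def cheapest_offer(offers):
        """Return (shop, price) of the cheapest offer for one product."""
        return min(offers.items(), key=lambda item: item[1])

    def intermediary_worthwhile(own_search_cost, intermediary_fee):
        """Use the intermediary when searching yourself costs more than its fee."""
        return own_search_cost > intermediary_fee

    offers = {"Shop A": 249.0, "Shop B": 239.9, "Shop C": 259.5}   # invented prices in EUR
    shop, price = cheapest_offer(offers)
    print(f"Cheapest: {shop} at {price:.2f} EUR")
    print("Use intermediary:", intermediary_worthwhile(own_search_cost=15.0, intermediary_fee=5.0))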

Memex

Mr. Bush first wrote of the device he called the memex early in the 1930s. However, it was not until 1945 that his essay "As We May Think" was published in Atlantic Monthly. The frequency with which this article has been cited in hypertext research attests to its importance. In particular, both Douglas Engelbart and Ted Nelson have acknowledged its pivotal influence. (From Memex to Hypertext contains both a letter from Engelbart to Bush (235) and an homage to Bush by Nelson (245).)

The memex is "a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility" (102). A memex resembled a desk with two pen-ready touch screen monitors and a scanner surface. Within would lie several gigabytes (if not more) of storage space, filled with textual and graphic information, and indexed according to a universal scheme. All of this seems quite visionary for the early 1930s, but Bush himself viewed it as "conventional" (103).

Bush saw the ability to navigate the enormous data store as a more important development than the futuristic hardware. Here he describes building a path to connect information of interest:

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined [...]
Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. (103)
(http://www2.iath.virginia.edu/elab/hfl0051.html 22.03.2012)
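The trail mechanism Bush describes maps naturally onto a small data structure. The following Python sketch only illustrates the linking idea (named trails of pairwise-joined items that can be recalled from one another and reviewed in order); it is not a faithful memex, and the item names are invented.

    # Minimal sketch of Bush's "trail": items joined pairwise into a named trail,
    # so that from one item the other can be recalled and the trail reviewed in turn.
    class Memex:
        def __init__(self):
            self.trails = {}          # trail name -> ordered list of item ids
            self.links = {}           # item id -> set of directly joined item ids

        def join(self, trail, item_a, item_b):
            """Permanently join two items and append them to the named trail."""
            self.links.setdefault(item_a, set()).add(item_b)
            self.links.setdefault(item_b, set()).add(item_a)
            seq = self.trails.setdefault(trail, [])
            for item in (item_a, item_b):
                if not seq or seq[-1] != item:
                    seq.append(item)

        def recall(self, item):
            """Items instantly reachable from the one currently 'in view'."""
            return sorted(self.links.get(item, set()))

        def review(self, trail):
            """Turn through the trail like the pages of a book."""
            return list(self.trails.get(trail, []))

    m = Memex()
    m.join("bow-and-arrow", "encyclopedia:bow", "article:turkish-bow")
    m.join("bow-and-arrow", "article:turkish-bow", "sketch:elastic-materials")
    print(m.review("bow-and-arrow"))
    print(m.recall("article:turkish-bow"))
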
Broadcasting

People who know me and what I listen to for podcasts and such will say that I’m obsessed with Internet broadcasting. In a recent post by Pat Flynn, I pointed out the quality content coming from the 5by5 Studios. I truly did stumble upon this Internet broadcasting network, founded by Dan Benjamin, and have not been disappointed yet.

The Impact Received

Within just a year the network has gained a strong reputation and many proud sponsors. The numbers kept growing, reaching around 300,000 downloads in a given week. Just to show that it takes a good reputation, quality content and the right contacts: his most recent show, Back to Work with Merlin Mann, was downloaded more than 50,000 times in less than 24 hours. (http://www.blogussion.com/expansion/internet-broadcasting/ 22.03.2012)

 

Network Neutrality

When we log onto the Internet, we take lots of things for granted. We assume that we'll be able to access whatever Web site we want, whenever we want to go there. We assume that we can use any feature we like -- watching online video, listening to podcasts, searching, e-mailing and instant messaging -- anytime we choose. We assume that we can attach devices like wireless routers, game controllers or extra hard drives to make our online experience better.

What makes all these assumptions possible is "Network Neutrality," the guiding principle that preserves the free and open Internet. Net Neutrality means that Internet service providers may not discriminate between different kinds of content and applications online. It guarantees a level playing field for all Web sites and Internet technologies. But all that could change.

The biggest cable and telephone companies would like to charge money for smooth access to Web sites, speed to run applications, and permission to plug in devices. These network giants believe they should be able to charge Web site operators, application providers and device manufacturers for the right to use the network. Those who don't make a deal and pay up will experience discrimination: Their sites won't load as quickly, and their applications and devices won't work as well. Without legal protection, consumers could find that a network operator has blocked the Web site of a competitor, or slowed it down so much that it's unusable.

The network owners say they want a "tiered" Internet. If you pay to get in the top tier, your site and your service will run fast. If you don't, you'll be in the slow lane.(http://www.savetheinternet.com/net-neutrality-101 22.03.2012)
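To illustrate the "tiered" idea in the simplest possible terms, here is a toy Python simulation. It is not any ISP's actual system; the site names are invented. Traffic from sites that paid for the top tier is always served before best-effort traffic, so the slow lane only moves when the fast lane is empty.

    # Toy "tiered Internet" scheduler: paid traffic is always dequeued first.
    from collections import deque

    fast_lane = deque()   # sites that paid for the top tier
    slow_lane = deque()   # everyone else

    def enqueue(site, paid):
        (fast_lane if paid else slow_lane).append(site)

    def serve_next():
        if fast_lane:
            return fast_lane.popleft()
        if slow_lane:
            return slow_lane.popleft()
        return None

    enqueue("bigvideo.example", paid=True)
    enqueue("startup.example", paid=False)
    enqueue("bignews.example", paid=True)

    while (site := serve_next()) is not None:
        print("delivering traffic for", site)
    # Output order: bigvideo.example, bignews.example, startup.example --
    # the non-paying site is served last even though it arrived second.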

 

Address depletion

At the recent APNIC meeting in New Delhi, the subject of IPv4, IPv6, and transition mechanisms was highlighted in the plenary session [1]. This article briefly summarizes that session and the underlying parameters in IPv4 address depletion and the transition to IPv6.

IPv4 Status

As of September 2007 we have some 18 percent of the unallocated IPv4 address pool remaining with the Internet Assigned Numbers Authority (IANA), and 68 percent has already been allocated to the Regional Internet Registries (RIRs) and through the RIRs to Internet Service Providers (ISPs) and end users. The remaining 14 percent of the IPv4 address space is reserved for private use, multicast, and special purposes. Another way of looking at this situation is that we have exhausted four-fifths of the unallocated address pool in IPv4, and one-fifth remains for future use. It has taken more than two decades of Internet growth to expend this initial four-fifths of the address space, so why shouldn't it take a further decade to consume what remains?

At this point the various predictive models come into play, because the history of the Internet has not been a uniformly steady model. The Internet began in the 1980s very quietly; the first round of explosive growth in demand was in the early 1990s as the Internet was adopted by the academic and research sector. At the time, the address architecture used a model where class A networks (or a /8) were extremely large, the class B networks (/16) were also too large, and the class C networks (/24) were too small for most campuses. The general use of class B address blocks was an uncomfortable compromise between consuming too much address space and consuming too many routing slots through address fragmentation. The subsequent shift to a classless address architecture in the early 1990s significantly reduced the levels of IPv4 address consumption for the next decade. However, over the past five years the demand levels for addresses have been accelerating again. Extensive mass-market broadband deployment, the demand for public non-Network Address Translation (NAT) addresses for applications such as Voice over IP (VoIP), and continuing real cost reductions in technology that has now brought the Internet to large populations in developing economies all contribute to an accelerating IPv4 address consumption rate.
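The arithmetic behind the classful-versus-classless point can be checked with a few lines of Python using the standard ipaddress module; the /20 at the end is simply one example of an intermediate CIDR size, not a prefix mentioned in the article.

    # Address counts per prefix length: class A (/8) and class B (/16) blocks are
    # far larger than most campuses need, a class C (/24) is often too small, and
    # CIDR allows intermediate sizes such as a /20.
    import ipaddress

    for prefix in ("10.0.0.0/8", "172.16.0.0/16", "192.168.1.0/24", "192.168.16.0/20"):
        net = ipaddress.ip_network(prefix)
        print(f"{prefix:18} -> {net.num_addresses:>10,} addresses")
    # /8  ->  16,777,216 addresses   (class A)
    # /16 ->      65,536 addresses   (class B)
    # /24 ->         256 addresses   (class C)
    # /20 ->       4,096 addresses   (a CIDR-era compromise size)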

Various approaches to modeling this address consumption predict that the IANA unallocated address pool will be fully depleted sometime in 2010 or 2011 [2, 3, 4, 5].
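None of the cited predictive models [2, 3, 4, 5] is this simple, but a naive back-of-the-envelope extrapolation shows the shape of such an estimate. The allocation rate used below is an assumed figure, not data from the article; only the "18 percent remaining as of September 2007" comes from the text.

    # Naive extrapolation sketch (not one of the cited models): remaining pool
    # divided by an ASSUMED consumption rate gives a rough exhaustion date.
    from datetime import date, timedelta

    remaining_slash8s = 0.18 * 256          # ~46 /8 blocks still with IANA (Sept 2007)
    assumed_rate_per_year = 12              # assumed allocation rate in /8s per year

    years_left = remaining_slash8s / assumed_rate_per_year
    exhaustion = date(2007, 9, 1) + timedelta(days=365.25 * years_left)
    print(f"~{years_left:.1f} years left -> projected exhaustion around {exhaustion:%B %Y}")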

Transitioning to IPv6

The obvious question is "What then?", and the commonly assumed answer to that question is one that the Internet Engineering Task Force (IETF) started developing almost 15 years ago, namely a shift to use a new version of the Internet Protocol: what we now know as IP Version 6, or IPv6. But if IPv6 really is the answer to this problem of IPv4 unallocated address-pool depletion, then we appear to be leaving the transition process quite late. The uptake of IPv6 in the public Internet remains extremely small as compared to IPv4 [6]. If we really have to have IPv6 universally deployed by the time we fully exhaust the unallocated IPv4 address pools, then this objective appears to be unattainable during the 24 months we have to complete this work. The more likely scenario we face is that we will not have IPv6 fully deployed in the remaining time, implying a need to be more inventive about IPv4 in the coming years, as well as inspecting more closely the reason why IPv6 has failed to excite much reaction on the part of the industry to date.

We need to consider both IPv4 and IPv6 when looking at these problems with transition because of an underlying limitation in technology: IPv6 is not "backward-compatible" with IPv4. An IPv6 host cannot directly communicate with an IPv4 host. The IETF worked on ways to achieve this through intermediaries, such as protocol-translating NATs [7], but this approach has recently been declared "historic" because of technical and operational difficulties [8]. That decision leaves few alternatives. If a host wants to talk to the IPv4 world, it cannot rely on clever protocol-translating intermediaries somewhere, and it needs to have a local IPv4 protocol stack, a local IPv4 address, and a local IPv4 network and IPv4 transit. And to speak to IPv6 hosts, it needs the same set of IPv6 prerequisites. This approach to transition through replication of the entire network protocol infrastructure is termed "Dual Stack." The corollary of Dual Stack is continued demand for IPv4 addresses to address the entire Internet for as long as this transition takes. The apparent contradiction here is that we do not appear to have sufficient IPv4 addresses in the unallocated address pools to sustain this Dual Stack approach to transition for the extended time periods that we anticipate this process will take.
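What Dual Stack means for an individual client can be sketched in a few lines of Python: ask the resolver for both IPv6 and IPv4 addresses and try them in turn, because an IPv6-only socket cannot reach an IPv4-only host directly. The host name in the commented-out call is only a placeholder.

    # Sketch of dual-stack connection logic: AF_UNSPEC asks the resolver for
    # candidates in every address family, and the client tries each in order.
    import socket

    def connect_dual_stack(host, port):
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(5)
                sock.connect(sockaddr)
                print("connected via", "IPv6" if family == socket.AF_INET6 else "IPv4")
                return sock
            except OSError:
                continue
        raise OSError(f"no usable IPv4 or IPv6 path to {host}")

    # connect_dual_stack("www.example.com", 80)   # placeholder host name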

What Can We Expect?

So we can expect that IPv4 addresses will continue to be in demand well beyond any anticipated date of exhaustion of the unallocated address pool, because in the Dual Stack transition environment all new and expanding network deployments need IPv4 service access and addresses. But the address distribution process will no longer be directly managed through address allocation policies after the allocation pool is exhausted.

Ideas that have been aired in address policy forums include encouraging NAT deployment in IPv4, expanding the private use of IPv4 address space to include the last remaining "reserved-for-future-use" address block, various policies relating to rationing the remaining IPv4 address space, increased efforts of address reclamation, the recognition of address transfers, and the use of markets to support address distribution.

Of course the questions here are about how long we need to continue to rely on IPv4, how such new forms of address distribution would affect existing notions of fairness and efficiency of use, and whether this effect would imply escalation of cost or some large-scale effect on the routing system.

On the other hand, is IPv6 really ready to assume the role of the underpinning of the global Internet? One view is that although the transition to a universal deployment of IPv6 is inevitable, numerous immediate concerns have impeded IPv6 adoption, including the lack of backward compatibility and the absence of simple, useful, and scalable translation or transition mechanisms [9]. So far the business case for IPv6 has not been compelling, and it appears to be far easier for ISPs and their customers to continue along the path of IPv4 and NATs.

When we contemplate this transition, we also need to be mindful of what we need to preserve across this transition, including the functions and integrity of the Internet as a service platform, the functions of existing applications, the viability of routing, the capability to sustain continued growth, and the integrity of the network infrastructure.

It appears that what could be useful right now is clear and coherent information about the situation and current choices, and an analysis of the implications of various options. When looking at such concerns of significant change, we need to appreciate both the limitations and the strengths of the Internet as a global deregulated industry and we need, above all else, to preserve a single coherent networked outcome. Perhaps this topic is far broader than purely technical, and when we examine it from a perspective that embraces economic considerations, business imperatives, and public policy objectives, we need to understand the broader context in which these processes of change are progressing [10].

It is likely that some disruptive aspects of this transition will affect the entire industry. (http://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_10-3/103_addr-dep.html 22.03.2012)
Have fun!
