Affiliate (commerce)
From Wikipedia, the free encyclopedia

An affiliate is a commercial entity with a relationship with a peer or a larger entity.

Corporate structure

A corporation may be referred to as an affiliate of another when it is related to it but not strictly controlled by it, as it would be in a subsidiary relationship, or when it is desired to avoid the appearance of control. This is sometimes seen with companies that need to avoid restrictive laws (or negative public opinion) on foreign ownership.

For the concept as exercised in the North American broadcasting industry see network affiliate.

Electronic commerce

Affiliate marketing typically refers to an electronic commerce version of the traditional agent/referral fee sales channel concept. An e-commerce affiliate is a website which links back to an e-commerce site such as Amazon.com with the goal of making a commission for referred sales.

However, as e-commerce continues to evolve, e-commerce affiliates are no longer restricted to website owners. Bloggers and members of different online community forums can be affiliates as well. Many emerging affiliate programs are now accepting bloggers and individuals, not necessarily webmasters, to be affiliates.

Affiliates can also be referred to as publishers. An affiliate program does not necessarily have to be run through an affiliate network; sometimes it is run directly by the e-commerce web site that actually sells the products and services. The advantage of this method of marketing is that it cuts out the middleman, but it does require the affiliates to have a high degree of trust in the software and people behind the e-commerce web site in question.

Network affiliate
From Wikipedia, the free encyclopedia

In the broadcasting industry (especially in North America), a network affiliate (or affiliated station) is a local broadcaster which carries some or all of the programme line-up of a television or radio network, but is owned by a company other than the owner of the network. This distinguishes such a station from an owned-and-operated station (O&O), which is owned by its parent network.

In the United States, Federal Communications Commission (FCC) regulations limit the number of network-owned stations as a percentage of total market size. As such, networks tend to have O&Os only in the largest media markets (e.g. New York City and Los Angeles), and rely on affiliates to carry their programming in other markets. However, even the largest markets may have network affiliates in lieu of O&Os. For instance, Tribune Broadcasting's WPIX serves as the New York City affiliate for the CW Television Network, which does not have an O&O in that market. On the other hand, several other TV stations in the same market — WABC (ABC), WCBS (CBS), WNBC (NBC), WNYW (Fox) and WWOR-TV (MyNetworkTV) — are O&Os.

In Canada, the Canadian Radio-Television and Telecommunications Commission (CRTC) has significantly more lenient rules regarding media ownership. As such, most television stations, regardless of market size, are now O&Os of their respective networks, with only a few true affiliates remaining. The Canadian Broadcasting Corporation originally relied on a large number of privately-owned affiliates to disseminate its radio and television programming. However, since the 1960s, most of the CBC Television affiliates have been replaced by network owned and operated stations or retransmitters. CBC Radio stations are now entirely O&O.

While network-owned stations will normally carry the full programming schedule of the originating network, an affiliate is independently-owned and typically under no obligation to do so. Affiliated stations often buy supplementary programming from another source, such as a syndicator or another television network which does not have coverage in the station's broadcast area, in addition to the programming they carry from their primary network affiliation.

Dual affiliations

In some smaller markets in the United States, a station may even be simultaneously listed as an affiliate of two networks. A station which has a dual affiliation is typically expected to air all or most of both networks' core prime time schedules — although programming from a station's secondary affiliation normally airs outside of its usual network time slot, and some less popular programs may simply be left off a station's schedule. Dual affiliations are most commonly associated with the smaller American television networks, such as MyNetworkTV and The CW, which air fewer hours of prime time programming than the "Big Four" networks and can thus be more easily combined into a single schedule, although historically the "Big Four" have had some dual-affiliate stations in small markets as well.

Further, with the ability of digital television stations to offer a distinct programming stream on a digital subchannel, traditional dual affiliation arrangements in which programming from two networks is combined into a single schedule are becoming more rare.

In Canada, affiliated stations may acquire broadcast rights to programs from a network other than their primary affiliation, but as such an agreement pertains only to a few specific programs, chosen individually, they are not normally considered to be affiliated with the second network.

Keyword Country Reviewed

Better Than Other Keyword Tools?

After trying a lot of keyword research tools on the market, I came across Keyword Country… and it would be safe to say that Keyword Country is the most comprehensive and most complete keyword research tool I have ever seen.

Keyword Country connects to major search engines for keyword research (including Google, Yahoo, MSN and Ask) and also connects to the industry by performing a deep analysis of competitor websites. It also steals all the niche keywords your industry happens to be focusing on.

Keyword Country claims that keyword research done through its software can yield up to 230% more traffic than any other keyword research tool available. In this product review, we will find out how far it lives up to such claims.

Basic Keyword Research (Network Search):

Just type in a keyword and Keyword Country will fetch you a huge list of keywords, sorted by profitability, along with other alternate keywords that your audience is searching for.

This mode of keyword research is database powered. Keyword Country claims to have the world's biggest keyword database and displays useful information like:
  • Google Searches
  • Yahoo Search Volume
  • Google Competition (inanchor intitle)
  • Yahoo Competition
  • Advertisers on Google
  • Yahoo Advertisers
  • Google Max CPC
  • MSN Search Volume
  • Google Clicks
  • MSN Competition
  • AdSense EPC (estimated)
  • MSN Advertisers
  • R/S Ratio (Google, Yahoo, MSN)
  • KEI Ratio (Google, Yahoo, MSN)

And much more….

Since there is a lot of data to look at, an advanced search can help you narrow down the huge keyword lists to get the exact keywords you want.


Watch Network Search Video >>


Besides keywords and their data, the related keywords list is quite useful: you often get new ideas for niches to target, and content ideas that can help you tap into the audience you are trying to reach.

Realtime Keyword Research:

This is another mode of keyword research and is even more powerful than network keyword research. It not only extracts keywords from major search engines like Google, Yahoo, MSN and Ask but also gets keywords from KC's Network Search and, most importantly, it reverse engineers any industry's top 100 high-ranking websites for each keyword in the list. It also extracts the niche keywords that those websites are targeting to rank high and tap traffic out of search engines.

This type of keyword research easily finds newborn keywords that the industry is actively using but that free keyword tools, like those offered by Google, Yahoo and MSN, will take around 2-6 months to show, and will only show if the keyword makes it into their top 100 or 150 most-searched keyword lists.

    Realtime Keyword Research:
  • Blocks junk keywords
  • Filters duplicate keywords
  • Keeps the keyword list laser targeted
  • Clusters the keywords in ready-to-go format
  • Gives you a great amount of control over your keyword research

Watch Realtime Search Video >> 

Stealing Competitors' Profitable Keywords - Site Analysis

This is a very powerful tool; I couldn't find anything anywhere on the web that comes even close to KC's Site Analysis. It lets you practically reverse engineer the SEO strategies and PPC campaigns of your competitors. It can detect the search engine keywords (paid or non-paid) that are responsible for up to 96% of the traffic to any given website.

Just load up your competitor list in Site Analysis and Keyword Country will rip all the keywords that your competitors are using on their pages, along with the HTML tag details in which they can be found.

    For SEO:
  • Finds keywords that are common between all top rankers
  • Finds the LSI matches they are using
  • Finds low-competition / high-traffic keywords
  • Details of other niches that they are focusing on
    For PPC Advertising:
  • Analyze landing pages and extract profitable keywords out of them
  • Find the AdWords keywords that competitors have found profitable after spending thousands of dollars on market research

Site Analysis even lets you analyze your own content before making it public to search engines, to ensure that the keywords for which you are optimizing your content are actually there in your content.

Watch Site Analysis Video >>


Semantic Keyword Research / Related Search Terms:

This keyword research lists all the LSI terms, or semantically related keywords, of a given keyword. These are the keywords that you need to include in your web pages to naturally rank higher on search engines. Keyword results come with search volume estimates, which make it easier for you to judge which keywords to go for. Additionally, it also allows you to dig deeper into your keyword in Network Search or Realtime Search.

Watch Site Analysis Video >>

Keywords Research In 32+ Languages:


Keyword Country is the only keyword engine that gets you keywords in French, Spanish, Dutch, German and 32 other languages (and does it the right way). Foreign language support options extend your keyword digging reach and allow you to generate your keyword list in languages other than English.

This is ideal when your audience speaks multiple languages and lets you target non-English speaking regions.

Misspellings Keyword Research:

Misspellings are keywords that are cheaper to buy for AdWords campaigns and less competitive for achieving top search engine rankings. Keyword Country covers more than 8 sources of misspellings, including mobile search misspellings and popular misspellings of English and 31 other languages. Here you can find many keywords that have less competition and considerable traffic, and that are cheaper to buy and easier to target.

Watch Misspellings Video >> 

Niche Keywords Brainstorming With Keyword Map

Keyword Map sorts the keywords into a directory structure that helps you find the most popular niches that are driving traffic to your industry. It helps you uncover popular niches that are related to your main keyword but that you are not targeting right now. A good amount of traffic flows towards these niches, and the level of that traffic increases every single month. It gives you excellent ideas on which you can build content and increase your website traffic, and it helps you discover many new opportunities. It is ideal when you are looking for new niches to compete in.

Watch Keyword Map Video >> 

24x7 Customer Service

The customer service offered by Keyword Country is excellent. I can personally recommend it given the fact that I had some questions to ask and all were answered fully and in simple language. I accessed the Live Chat option but there were several ways in which I could have contacted Keyword Country to find out exactly what I needed to know quickly and efficiently. I didn’t have to waste any time before using the services they offered and increasing my income. If only all providers of software services like this one went above and beyond the call of duty as much! I am most definitely satisfied with the high level of service and would heartily recommend it.

Summary:

So what did I end up with after using Keyword Country?


Strengths:

  • Largest keyword database
  • Digs deeper into industry for keywords
  • Supports keyword research in 32+ languages
  • Shows lots of useful data for each keyword
  • Steals competitors' profitable keywords (legally!)
  • LSI / semantic keyword research
  • Misspelling keyword research
  • Exports keywords in ready-to-go format
  • Excellent brainstorming tools

Weak Aspects:

  • Owing to detailed keyword research, it may become slow at times
  • Your computer's IP may be temporarily banned by some online keyword tools for heavy web crawling

Keyword Country is the most complete keyword tool on the market and I feel that this tool is well worth the investment. I can honestly say that Keyword Country has quickly become my new first-choice tool for researching keyword phrases. I now use Keyword Country and my keyword research ability has never, ever been so strong. I have also used several other programs, but Keyword Country is what I would recommend to anyone. This tool is dynamite!

Get Access to Keyword Country | Watch Videos of Keyword Country | Compare Keyword Country with other Keyword Tools

Wireless access point

Written by SEPTA MUNARDI on Wednesday, 26 August 2009 at 08.19

From Wikipedia, the free encyclopedia

In computer networking, a wireless access point (WAP) is a device that allows wireless communication devices to connect to a wireless network using Wi-Fi, Bluetooth or related standards. The WAP usually connects to a wired network, and can relay data between the wireless devices (such as computers or printers) and wired devices on the network.

In industrial wireless networking, designs are ruggedized, with a metal enclosure, a DIN-rail mount, and tolerance for a wider operating temperature range, high humidity, and exposure to water, dust, and oil. Wireless security features include WPA-PSK, WPA2, IEEE 802.1X/RADIUS, WDS, WEP, TKIP, and CCMP (AES) encryption. Unlike in the consumer computer market, an industrial wireless access point can also be used as a bridge, router, or client.

Introduction

Prior to wireless networks, setting up a computer network in a business, home, or school often required running many cables through walls and ceilings in order to deliver network access to all of the network-enabled devices in the building. With the advent of the wireless access point, network users are now able to add devices that access the network with few or no cables. Today's WAPs are built to support a standard for sending and receiving data using radio frequencies rather than cabling. Those standards, and the frequencies they use, are defined by the IEEE. Most WAPs use IEEE 802.11 standards.

Common WAP Applications

A typical corporate use involves attaching several WAPs to a wired network and then providing wireless access to the office LAN. Within the range of the WAPs, the wireless end user has a full network connection with the benefit of mobility. In this instance, the WAP functions as a gateway for clients to access the wired network.

A Hot Spot is a common public application of WAPs, where wireless clients can connect to the Internet without regard for the particular networks to which they have attached for the moment. The concept has become common in large cities, where a combination of coffeehouses, libraries, as well as privately owned open access points, allow clients to stay more or less continuously connected to the Internet, while moving around. A collection of connected Hot Spots can be referred to as a lily-pad network.

The majority of WAPs are used in Home wireless networks.[citation needed] Home networks generally have only one WAP to connect all the computers in a home. Most are wireless routers, meaning converged devices that include the WAP, a router, and, often, an ethernet switch. Many also converge a broadband modem. In places where most homes have their own WAP within range of the neighbors' WAP, it's possible for technically savvy people to turn off their encryption and set up a wireless community network, creating an intra-city communication network without the need of wired networks.

A WAP may also act as the network's arbitrator, negotiating when each nearby client device can transmit. However, the vast majority of currently installed IEEE 802.11 networks do not implement this, using a distributed pseudo-random algorithm called CSMA/CA instead.
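The contention idea can be made concrete with a small simulation. The Python sketch below is purely illustrative: the station names, window sizes and round count are invented for the example, and the model is far simpler than real IEEE 802.11 timing, but it shows the core mechanism of random backoff with binary exponential growth of the contention window after a collision.

import random

# Toy model of CSMA/CA contention: each station defers for a random number of
# idle slots drawn from its contention window; the station whose counter
# expires first transmits. A tie models a collision, after which the colliding
# stations double their contention window (binary exponential backoff).
# Names and constants are illustrative, not taken from the IEEE 802.11 spec.

CW_MIN, CW_MAX = 15, 1023          # contention window bounds, in slots

def contend(stations, rounds=5):
    cw = {s: CW_MIN for s in stations}
    for r in range(1, rounds + 1):
        backoff = {s: random.randint(0, cw[s]) for s in stations}
        winning_slot = min(backoff.values())
        winners = [s for s, b in backoff.items() if b == winning_slot]
        if len(winners) == 1:
            print(f"round {r}: {winners[0]} transmits after {winning_slot} idle slots")
            cw[winners[0]] = CW_MIN                      # reset after success
        else:
            print(f"round {r}: collision between {winners}, backing off")
            for s in winners:                            # double the window, capped
                cw[s] = min(2 * (cw[s] + 1) - 1, CW_MAX)

contend(["laptop", "phone", "printer"])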

Wireless Access Point vs. Ad-Hoc Network

Some people confuse Wireless Access Points with Wireless Ad-Hoc networks. An Ad-Hoc network uses a connection between two or more devices without using an access point: the devices communicate directly. An Ad-Hoc network is used in situations such as a quick data exchange or a multiplayer LAN game because it is easy to set up and does not require an access point. Due to its peer-to-peer layout, Ad-Hoc connections are similar to Bluetooth ones and are generally not recommended for a permanent installation.

Internet access via Ad-Hoc networks, using features like Windows' Internet Connection Sharing, may work well with a small number of devices that are close to each other, but Ad-Hoc networks don't scale well. Internet traffic will converge to the nodes with direct internet connection, potentially congesting these nodes. For internet-enabled nodes, Access Points have a clear advantage, being designed to handle this load.

Limitations

One IEEE 802.11 WAP can typically communicate with 30 client systems located within a radius of 100 m.[citation needed] However, the actual range of communication can vary significantly, depending on such variables as indoor or outdoor placement, height above ground, nearby obstructions, other electronic devices that might actively interfere with the signal by broadcasting on the same frequency, type of antenna, the current weather, operating radio frequency, and the power output of devices. Network designers can extend the range of WAPs through the use of repeaters and reflectors, which can bounce or amplify radio signals that would ordinarily go unreceived. In experimental conditions, wireless networking has operated over distances of several kilometers.

Most jurisdictions have only a limited number of frequencies legally available for use by wireless networks. Usually, adjacent WAPs will use different frequencies to communicate with their clients in order to avoid interference between the two nearby systems. Wireless devices can "listen" for data traffic on other frequencies, and can rapidly switch from one frequency to another to achieve better reception. However, the limited number of frequencies becomes problematic in crowded downtown areas with tall buildings using multiple WAPs. In such an environment, signal overlap becomes an issue causing interference, which results in dropped signals and data errors.

Wireless networking lags behind wired networking in terms of increasing bandwidth and throughput. While (as of 2004) typical wireless devices for the consumer market can reach speeds of 11 Mbit/s (megabits per second) (IEEE 802.11b) or 54 Mbit/s (IEEE 802.11a, IEEE 802.11g), wired hardware of similar cost reaches 1000 Mbit/s (Gigabit Ethernet). One impediment to increasing the speed of wireless communications comes from Wi-Fi's use of a shared communications medium, so a WAP is only able to use somewhat less than half the actual over-the-air rate for data throughput. Thus a typical 54 Mbit/s wireless connection actually carries TCP/IP data at 20 to 25 Mbit/s. Users of legacy wired networks expect faster speeds, and people using wireless connections keenly want to see the wireless networks catch up.
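As a rough back-of-the-envelope check on those figures, the short Python snippet below applies an assumed efficiency factor (my own illustrative range, not a measured value) to the 54 Mbit/s air rate:

# Back-of-the-envelope estimate of usable TCP/IP throughput on a shared
# 802.11g link. The efficiency factors are assumptions for illustration,
# consistent with the "somewhat less than half" figure quoted above.
nominal_rate_mbps = 54.0
for efficiency in (0.37, 0.46):          # assumed lower/upper bounds
    usable = nominal_rate_mbps * efficiency
    print(f"{nominal_rate_mbps:.0f} Mbit/s over the air -> ~{usable:.0f} Mbit/s of TCP/IP data")
# Prints roughly 20 and 25 Mbit/s, matching the range quoted above.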

As of 2007, a new wireless standard, 802.11n, was awaiting final certification from the IEEE. This new standard operates at speeds up to 540 Mbit/s and at longer distances (~50 m) than 802.11g. Use of legacy wired networks (especially in consumer applications) is expected[by whom?] to decline sharply as the common 100 Mbit/s speed is surpassed and users no longer need to worry about running wires to attain high bandwidth.

By 2008, access points and client devices based on draft 802.11n had already taken a fair share of the marketplace, but with inherent problems integrating products from different vendors.

Security

Wireless access has special security considerations. Many wired networks base the security on physical access control, trusting all the users on the local network, but if wireless access points are connected to the network, anyone on the street or in the neighboring office could connect.

The most common solution is wireless traffic encryption. Modern access points come with built-in encryption. The first generation encryption scheme WEP proved easy to crack; the second and third generation schemes, WPA and WPA2, are considered secure if a strong enough password or passphrase is used.

Some WAPs support hotspot style authentication using RADIUS and other authentication servers.

Scale-free network

Written by SEPTA MUNARDI at 07.13

From Wikipedia, the free encyclopedia

A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k) ~ k−γ, where γ is a constant whose value is typically in the range 2 < γ < 3, although occasionally it may lie outside these bounds.
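To make the definition concrete, the following Python sketch (using networkx and numpy) estimates γ for an example graph with a crude log-log fit of the empirical degree distribution; the graph, its size and the fitting method are illustration choices, and serious analyses usually rely on maximum-likelihood estimators instead.

from collections import Counter

import numpy as np
import networkx as nx

# Illustrative only: build a Barabási-Albert graph (known to be scale-free)
# and estimate gamma from P(k) ~ k^(-gamma) with a simple log-log linear fit.
G = nx.barabasi_albert_graph(n=10000, m=3, seed=42)

degree_counts = Counter(d for _, d in G.degree())
ks = np.array(sorted(degree_counts))                  # observed degrees k
pk = np.array([degree_counts[k] for k in ks], float)
pk /= pk.sum()                                        # empirical P(k)

slope, intercept = np.polyfit(np.log(ks), np.log(pk), 1)
print(f"estimated gamma ~ {-slope:.2f}")              # BA graphs give gamma near 3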

Scale-free networks are noteworthy because many empirically observed networks appear to be scale-free, including the world wide web, protein networks, citation networks, and some social networks.

Highlights

Scale-free networks show a power law degree distribution like many real networks. The mechanism of preferential attachment has been proposed as an underlying generative model to explain power law degree distributions in some networks. It has also been demonstrated  that scale-free topologies in networks of fixed sizes can arise as a result of Dual Phase Evolution.

History

In studies of the networks of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of links to papers—i.e., the number of citations they receive—had a heavy-tailed distribution following a Pareto distribution or power law, and thus that the citation network was scale-free. He did not however use the term "scale-free network" (which was not coined until some decades later). In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage" but which is today more commonly known under the name preferential attachment.

Recent interest in scale-free networks started in 1999 with work by Albert-László Barabási and colleagues at the University of Notre Dame who mapped the topology of a portion of the Web (Barabási and Albert 1999), finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node.

After finding that a few other networks, including some social and biological networks, also had heavy-tailed degree distributions, Barabási and collaborators coined the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution. Soon after, Amaral et al. showed that most of the real-world networks can be classified into two large categories according to the decay of P(k) for large k.

Barabási and Albert proposed a mechanism to explain the appearance of the power-law distribution, which they called "preferential attachment" and which is essentially the same as that proposed by Price. Analytic solutions for this mechanism (also similar to the solution of Price) were presented in 2000 by Dorogovtsev, Mendes and Samukhin and independently by Krapivsky, Redner, and Leyvraz, and later rigorously proved by mathematician Béla Bollobás. Notably, however, this mechanism only produces a specific subset of networks in the scale-free class, and many alternative mechanisms have been discovered since.

Although the scientific community is still debating the usefulness of the scale-free term in reference to networks, Li et al. (2005) recently offered a potentially more precise "scale-free metric". Briefly, let g be a graph with edge-set ε, and let the degree (number of edges) at a vertex i be d_i. Define

    s(g) = Σ_{(i,j) ∈ ε} d_i · d_j

This is maximised when high-degree nodes are connected to other high-degree nodes. Now define

    S(g) = s(g) / s_max

where s_max is the maximum value of s(h) for h in the set of all graphs with an identical degree distribution to g. This gives a metric between 0 and 1, such that graphs with low S(g) are "scale-rich", and graphs with S(g) close to 1 are "scale-free". This definition captures the notion of self-similarity implied in the name "scale-free".
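Computing s_max exactly is hard in general, so the short Python sketch below (the two example graphs are invented for illustration) only evaluates the un-normalised quantity s(g); it shows that wiring high-degree nodes to each other raises s(g) even when the degree sequence is left unchanged.

import networkx as nx

def s_metric(G):
    # s(g) = sum over edges (i, j) of d_i * d_j
    return sum(G.degree(i) * G.degree(j) for i, j in G.edges())

# Two toy graphs with the identical degree sequence [3, 3, 1, 1, 1, 1, 1, 1]:
# in A the two degree-3 hubs are wired to each other, in B they are not.
A = nx.Graph([("h1", "h2"), ("h1", "a"), ("h1", "b"),
              ("h2", "c"), ("h2", "d"), ("e", "f")])
B = nx.Graph([("h1", "a"), ("h1", "b"), ("h1", "c"),
              ("h2", "d"), ("h2", "e"), ("h2", "f")])

print("s(A) =", s_metric(A))   # 22: the hub-to-hub edge pushes s(g) up
print("s(B) =", s_metric(B))   # 18: same degrees, but the hubs are kept apart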

Characteristics and examples

As with all systems characterized by a power law distribution, the most notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain. The power law distribution strongly influences the network topology. It turns out that the major hubs are closely followed by smaller ones. These, in turn, are followed by other nodes with an even smaller degree, and so on. This hierarchy allows for fault-tolerant behavior. Since failures occur at random and the vast majority of nodes have small degree, the likelihood that a hub would be affected is almost negligible. Even if such an event occurs, the network will not lose its connectedness, which is guaranteed by the remaining hubs. On the other hand, if we choose a few major hubs and take them out of the network, it simply falls apart and turns into a set of rather isolated graphs. Thus hubs are both the strength of scale-free networks and their Achilles' heel.

Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. That means that the low-degree nodes belong to very dense sub-graphs and those sub-graphs are connected to each other through hubs. Consider a social network in which nodes are people and links are acquaintance relationships between people. It is easy to see that people tend to form communities, i.e., small groups in which everyone knows everyone (one can think of such community as a complete graph). In addition, the members of a community also have a few acquaintance relationships to people outside that community. Some people, however, are so related to other people (e.g., celebrities, politicians) that they are connected to a large number of communities. Those people may be considered the hubs responsible for making such networks small-world networks.

At present, the more specific characteristics of scale-free networks can only be discussed in either the context of the generative mechanism used to create them, or the context of a particular real-world network thought to be scale-free. For instance, networks generated by preferential attachment typically place the high-degree vertices in the middle of the network, connecting them together to form a core, with progressively lower-degree nodes making up the regions between the core and the periphery. Many interesting results are known for this subclass of scale-free networks. For instance, the random removal of even a large fraction of vertices impacts the overall connectedness of the network very little, suggesting that such topologies could be useful for security, while targeted attacks destroy the connectedness very quickly. Other scale-free networks, which place the high-degree vertices at the periphery, do not exhibit these properties; notably, the structure of the Internet is more like this latter kind of network than the kind built by preferential attachment. Indeed, many of the results about scale-free networks have been claimed to apply to the Internet, but are disputed by Internet researchers and engineers.
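This robustness contrast is easy to reproduce numerically. The Python sketch below (graph size, attachment parameter and removal fraction are arbitrary example values) removes 30% of the nodes of a preferential-attachment graph either uniformly at random or in decreasing order of degree, and reports the relative size of the largest surviving connected component.

import random
import networkx as nx

def largest_component_fraction(G):
    # Fraction of remaining nodes that sit in the largest connected component.
    if G.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(G), key=len)
    return len(giant) / G.number_of_nodes()

random.seed(1)
G = nx.barabasi_albert_graph(n=5000, m=2, seed=1)    # preferential attachment
fraction_removed = 0.30
k = int(fraction_removed * G.number_of_nodes())

# Random failures: remove 30% of the nodes chosen uniformly at random.
G_rand = G.copy()
G_rand.remove_nodes_from(random.sample(list(G.nodes()), k))

# Targeted attack: remove the 30% highest-degree nodes (the hubs).
G_attack = G.copy()
hubs = sorted(G.degree(), key=lambda nd: nd[1], reverse=True)[:k]
G_attack.remove_nodes_from(n for n, _ in hubs)

print("after random failures :", largest_component_fraction(G_rand))
print("after targeted attack :", largest_component_fraction(G_attack))
# Typically the random-failure network keeps a large giant component,
# while the targeted attack shatters it into small pieces.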

As with most disordered networks, such as the small world network model, the average distance between two vertices in the network is very small relative to a highly ordered network such as a lattice. The clustering coefficient of scale-free networks can vary significantly depending on other topological details, and there are now generative mechanisms that allow one to create such networks that have a high density of triangles.

It is interesting that Cohen and Havlin proved that uncorrelated power-law graphs having 2 < γ < 3 will also have an ultrasmall diameter d ~ ln ln N.

Although many real-world networks are thought to be scale-free, the evidence remains inconclusive, primarily because the generative mechanisms proposed have not been rigorously validated against the real-world data. As such, it is too early to rule out alternative hypotheses. A few examples of networks claimed to be scale-free include:

  • Some social networks, including collaboration networks. An example that has been studied extensively is the collaboration of movie actors in films.
  • Protein-protein interaction networks.
  • Networks of sexual partners in humans, which affect the dispersal of sexually transmitted diseases.
  • Many kinds of computer networks, including the World Wide Web.
  • Semantic networks.

Generative models

These scale-free networks do not arise by chance alone. Erdős and Rényi (1960) studied a model of growth for graphs in which, at each step, two nodes are chosen uniformly at random and a link is inserted between them. The properties of these random graphs are not consistent with the properties observed in scale-free networks, and therefore a model for this growth process is needed.

The scale-free properties of the Web have been studied, and its distribution of links is very close to a power law, because there are a few Web sites with huge numbers of links, which benefit from a good placement in search engines and an established presence on the Web. Those sites are the ones that attract more of the new links. This has been called the winner takes all phenomenon.

The most widely known generative model for a subset of scale-free networks is Barabási and Albert's (1999) rich get richer generative model in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but proportional to the current in-degree of Web pages. This model was originally discovered by Derek J. de Solla Price in 1965 under the term cumulative advantage, but did not reach popularity until Barabási rediscovered the results under its current name (BA Model). According to this process, a page with many in-links will attract more in-links than a regular page. This generates a power-law but the resulting graph differs from the actual Web graph in other properties such as the presence of small tightly connected communities. More general models and networks characteristics have been proposed and studied (for a review see the book by Dorogovtsev and Mendes).
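A compact Python sketch of this rich-get-richer process is given below. It is a minimal illustration rather than the exact algorithm of any particular paper: the graph size and the number of links per new node are arbitrary, and the repeated-node list is just one convenient way to sample nodes with probability proportional to degree.

import random

def preferential_attachment(n_nodes, m=2, seed=0):
    """Grow a graph in which each new node attaches m links to existing nodes
    with probability proportional to their current degree."""
    random.seed(seed)
    edges = [(0, 1)]              # start from a single seed edge
    targets = [0, 1]              # each node appears once per unit of degree
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(random.choice(targets))   # degree-biased sampling
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])           # update the degree weights
    return edges

edges = preferential_attachment(10000)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print("max degree:", max(degree.values()))       # a few hubs emerge
print("median degree:", sorted(degree.values())[len(degree) // 2])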

A different generative model is the copy model studied by Kumar et al. (2000), in which new nodes choose an existent node at random and copy a fraction of the links of the existent node. This also generates a power law.
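The copy model can be sketched in the same spirit. In the illustrative Python snippet below (the copy probability, out-degree and network size are arbitrary choices, not Kumar et al.'s parameters), each new node picks an existing prototype node and, for each link it creates, either copies one of the prototype's links or points to a uniformly random existing node.

import random

def copy_model(n_nodes, out_links=3, copy_prob=0.7, seed=0):
    """Toy version of a copy model: new nodes copy a fraction of an existing
    node's links and pick the rest uniformly at random."""
    random.seed(seed)
    # Small fully connected seed graph of out_links + 1 nodes.
    neighbours = {i: set(range(out_links + 1)) - {i} for i in range(out_links + 1)}
    for new in range(out_links + 1, n_nodes):
        prototype = random.randrange(new)          # existing node to imitate
        proto_links = list(neighbours[prototype]) or [prototype]
        links = set()
        while len(links) < out_links:
            if random.random() < copy_prob:
                links.add(random.choice(proto_links))   # copy one of its links
            else:
                links.add(random.randrange(new))        # uniform random link
        neighbours[new] = links - {new}
    return neighbours

net = copy_model(5000)
in_degree = {}
for node, links in net.items():
    for target in links:
        in_degree[target] = in_degree.get(target, 0) + 1
print("max in-degree:", max(in_degree.values()))   # heavy-tailed in practice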

However, if we look at communities of interests in a specific topic, discarding the major hubs of the Web, the distribution of links is no longer a power law but resembles more a normal distribution, as observed by Pennock et al. (2002) in the communities of the home pages of universities, public companies, newspapers and scientists. Based on these observations, they propose a generative model that mixes preferential attachment with a baseline probability of gaining a link.

The growth of the networks (adding new nodes) is not a necessary condition for creating a scale-free topology. For instance, it has been shown that Dual Phase Evolution can produce scale-free topologies in networks of a fixed size [1]. Dangalchev (2004) gives examples of generating static scale-free networks. Another possibility (Caldarelli et al. 2002) is to consider the structure as static and draw a link between vertices according to a particular property of the two vertices involved. Once the statistical distribution of these vertex properties (fitnesses) is specified, it turns out that in some circumstances static networks also develop scale-free properties.

Recently, Manev and Manev (Med. Hypotheses, 2005) proposed that small world networks may be operative in adult brain neurogenesis. Adult neurogenesis has been observed in mammalian brains, including those of humans, but a question remains: how do new neurons become functional in the adult brain? It is proposed that the random addition of only a few new neurons functions as a maintenance system for the brain's "small-world" networks. Randomly added to an orderly network, new links enhance signal propagation speed and synchronizability. Newly generated neurons are ideally suited to become such links: they are immature, form more new connections compared to mature ones, and their number but not their precise location may be maintained by continuous proliferation and dying off. Similarly, it is envisaged that the treatment of brain pathologies by cell transplantation would also create new random links in small-world networks and that even a small number of successfully incorporated new neurons may be functionally important.

Why I think PayDotCom is the Best Affiliate Marketplace on the Net!

Written by SEPTA MUNARDI on Monday, 24 August 2009 at 07.48

Hi

septa munardi here...

If you are familiar with Clickbank.com (R), or even if you are not but you want to make profits online, then you will want to check this out ASAP ...

While I like Clickbank, and they are a great marketplace... they impose many restrictions on selling products or earning affiliate commissions...

Well, there is a GREAT NEW SERVICE now...

It is a new FREE marketplace where you can sell any product you want.

Yours OWN product...

- OR - (the best part)

You can become an INSTANT Affiliate for ANY item in their HUGE marketplace.

It is called PayDotCom.com!

Did I mention it is 100% FREE to Join!

This site is going to KILL all other marketplaces, and by now almost EVERY SINGLE SERIOUS online marketer has an account with PayDotCom.com

So get yours now and see how much they offer...

OH! - Also, they have their own affiliate program now that pays you COLD HARD cash just for sharing the site with people like I am doing with you...

They give you cool tools like BLOG WIDGETS, and they even have an advertising program to help you get traffic to your site.

If you want an ARMY of affiliates to sell your products for you, they also allow you to have Free placement in their marketplace!

Even better... If your product becomes one of the Top 25 products in its category in the marketplace (not that hard to do)...

...then you will get Free advertising on the Blog Widget, which is syndicated on THOUSANDS of sites worldwide and gets millions of impressions per month.

So, what are you waiting for...

PayDotCom.com ROCKS!

Get your FREE account now...

http://paydotcom.net/?affiliate=622873

Thanks,

septa munardi

P.S. - Make sure to get your Account NOW while it is Free to join.

Home network

Written by SEPTA MUNARDI on Sunday, 23 August 2009 at 06.34

From Wikipedia, the free encyclopedia

A home network or home area network (HAN) is a residential local area network, and is used to connect multiple devices within the home.

The simplest home networks are used to connect 2 or more PCs for sharing files, printers, and a single connection to the Internet (usually broadband Internet through a cable or DSL provider). A Home server can be added for increased functionality.

More recently telephone companies such as AT&T and British Telecom have been using home networking to provide triple play services (voice, video and data) to customers. These use IPTV to provide the video service. The home network usually operates over the existing home wiring (coax in North America, phone wires in multi dwelling units (MDU) and powerline in Europe). These home networks are often professionally installed and managed by the telco. The ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area networking over existing home wiring (power lines, phone lines and coaxial cables), is an example of a home networking technology designed specifically for IPTV delivery.

Network Devices
A home network may consist of the following components:
  • A broadband modem for connection to the internet (either a DSL modem using the phone line, or cable modem using the cable internet connection).
  • A residential gateway (sometimes called a router) connected between the broadband modem and the rest of the network. This enables multiple devices to connect to the internet simultaneously. Residential gateways, hubs/switches, DSL modems, and wireless access points are often combined.
  • A PC, or multiple PCs including laptops
  • A wireless access point, usually implemented as a feature rather than a separate box, for connecting wireless devices
  • Entertainment peripherals - an increasing number of devices can be connected to the home network, including DVRs like TiVo, digital audio players, games machines, stereo system, and IP set-top box.
  • Internet Phones (VoIP)
  • A network bridge connects two networks together, often giving a wired device, e.g. Xbox, access to a wireless network.
  • A network hub/switch - a central networking hub containing a number of Ethernet ports for connecting multiple networked devices
  • A network attached storage (NAS) device can be used for storage on the network.
  • A print server can be used to share printers among computers on the network.

Older devices may not have the appropriate connector to the network. USB and PCI network controllers can be installed in some devices to allow them to connect to networks.

Network devices may also be configured from a computer. For example, broadband modems are often configured through a web client on a networked PC. As networking technology evolves, more electronic devices and home appliances are becoming Internet ready and accessible through the home network. Set-top boxes from cable TV providers already have USB and Ethernet ports "for future use".

Network media

Ethernet cables are the standard medium for networks. However, homes are often more difficult to wire than office environments, and other technologies are being developed which don't require new wires.

Home networking may use

  • Ethernet Category 5 cable, Category 6 cable - for speeds of 10 Mbit/s, 100 Mbit/s, or 1 Gbit/s.
  • Wi-Fi Wireless LAN connections - for speeds up to 248 Mbit/s, dependent on signal strength and wireless standard.
  • Coaxial cables (TV antennas) - for speeds of 270 Mbit/s (see Multimedia over Coax Alliance) or 320 Mbit/s (see HomePNA)
  • Electrical wiring - for speeds of 14 Mbit/s to 200 Mbit/s (see Power line communication)
  • Phone wiring - for speeds of 160 Mbit/s (see HomePNA)
  • Fiber optics - although rare, new homes are beginning to include fiber optics for future use. Optical networks generally use Ethernet.
  • All home wiring (coax, powerline and phone wires) - future standard for speeds up to 1 Gbit/s being developed by the ITU-T (see G.hn)

Ethernet and Wireless are the most common standards. As the demand for home networks has increased, other alliances have formed to produce standards for networking alternatives.

Wireless network

Written by SEPTA MUNARDI at 06.11

From Wikipedia, the free encyclopedia

Wireless network refers to any type of computer network that is wireless, and is commonly associated with a telecommunications network whose interconnections between nodes are implemented without the use of wires. Wireless telecommunications networks are generally implemented with some type of remote information transmission system that uses electromagnetic waves, such as radio waves, for the carrier; this implementation usually takes place at the physical level or "layer" of the network.

Types

Wireless PAN

Wireless Personal Area Network (WPAN) is a type of wireless network that interconnects devices within a relatively small area, generally within reach of a person. For example, Bluetooth provides a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications.

Wireless LAN

Wireless Local Area Network (WLAN) is a wireless alternative to a computer Local Area Network (LAN) that uses radio instead of wires to transmit data back and forth between computers in a small area such as a home, office, or school. Wireless LANs are standardized under the IEEE 802.11 series.

  • Wi-Fi: Wi-Fi is a commonly used wireless network in computer systems to enable connection to the internet or other devices that have Wi-Fi functionalities. Wi-Fi networks broadcast radio waves that can be picked up by Wi-Fi receivers attached to different computers or mobile phones.
  • Fixed Wireless Data: This implements point to point links between computers or networks at two locations, often using dedicated microwave or laser beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without physically wiring the buildings together.

Wireless MAN

Wireless Metropolitan area networks are a type of wireless network that connects several Wireless LANs.

  • WiMAX is the term used to refer to wireless MANs and is covered in IEEE 802.16d/802.16e.

Mobile devices networks

In recent decades with the development of smart phones, cellular telephone networks have been used to carry computer data in addition to telephone conversations:

  • Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base system station which then connects to the operation and support station; it then connects to the switching station where the call is transferred to where it needs to go. GSM is the most common standard and is used for a majority of cell phones.
  • Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America and South Asia. Sprint happened to be the first service to set up a PCS.
  • D-AMPS: D-AMPS, which stands for Digital Advanced Mobile Phone Service, is an upgraded version of AMPS but it is being phased out due to advancement in technology. The newer GSM networks are replacing the older system.

Uses

Wireless networks have had a significant impact on the world as far back as World War II. Through the use of wireless networks, information could be sent overseas or behind enemy lines easily, efficiently and more reliably. Since then, wireless networks have continued to develop and their uses have grown significantly. Cellular phones are part of huge wireless network systems. People use these phones daily to communicate with one another. Sending information overseas is possible through wireless network systems using satellites and other signals to communicate across the world. Emergency services such as the police department utilize wireless networks to communicate important information quickly. People and businesses use wireless networks to send and share data quickly whether it be in a small office building or across the world.

Another important use for wireless networks is as an inexpensive and rapid way to be connected to the Internet in countries and regions where the telecom infrastructure is poor or there is a lack of resources, as in most developing countries.

Compatibility issues also arise when dealing with wireless networks. Different components not made by the same company may not work together, or might require extra work to fix these issues. Wireless networks are typically slower than those that are directly connected through an Ethernet cable.

A wireless network is more vulnerable, because anyone can try to break into a network broadcasting a signal. Many networks offer WEP (Wired Equivalent Privacy) security systems, which have been found to be vulnerable to intrusion. Though WEP does block some intruders, the security problems have caused some businesses to stick with wired networks until security can be improved. Another type of security for wireless networks is WPA (Wi-Fi Protected Access), which provides more security than a WEP setup. The use of firewalls can also help to mitigate security breaches in wireless networks that are more vulnerable.

Environmental concerns and health hazard

In recent times, there have been increased concerns about the safety of wireless communications, despite little evidence of health risks so far. The president of Lakehead University refused to sign off on the installation of a wireless network, citing a California Public Utilities Commission study which said that the possible risk of tumors and other diseases due to exposure to electromagnetic fields (EMFs) needs to be further investigated.

Computer networking

Written by SEPTA MUNARDI at 05.31

From Wikipedia, the free encyclopedia

Computer networking is the engineering discipline concerned with communication between computer systems or devices. Networking, routers, routing protocols, and networking over the public Internet have their specifications defined in documents called RFCs. Computer networking is sometimes considered a sub-discipline of telecommunications, computer science, information technology and/or computer engineering. Computer networks rely heavily upon the theoretical and practical application of these scientific and engineering disciplines. Three commonly distinguished types of network are the Internet, intranets, and extranets. A computer network is any set of computers or devices connected to each other with the ability to exchange data. Examples of different networks are:

  • Local area network (LAN), which is usually a small network constrained to a small geographic area.
  • Wide area network (WAN) that is usually a larger network that covers a large geographic area.
  • Wireless LANs and WANs (WLAN & WWAN) are the wireless equivalent of the LAN and WAN.

All networks are interconnected to allow communication with a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet).

Views of networks

Users and network administrators often have different views of their networks. Often, users who share printers and some servers form a workgroup, which usually means they are in the same geographic location and are on the same LAN. A community of interest has less of a connotation of being in a local area, and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.

Network administrators see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called, in the TCP/IP architecture, subnets, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators will be aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).

Informally, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISPs). From an engineering standpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
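For example, a program can perform this name-to-address translation through the ordinary socket API, which in turn consults the system resolver and DNS. The short Python sketch below looks up an example hostname (the hostname is just an illustration, and the call needs network access to succeed):

import socket

hostname = "www.wikipedia.org"            # example name only
address = socket.gethostbyname(hostname)  # the resolver/DNS turns the name into an IP address
print(f"{hostname} resolves to {address}")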

Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. Especially when money or sensitive information is exchanged, the communications are apt to be secured by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users, using secure Virtual Private Network (VPN) technology.

When used for gaming, one computer will have to act as the server while the others play through it.

History

Before the advent of computer networks that were based upon some type of telecommunications system, communication between calculation machines and early computers was performed by human users who carried instructions between them. Much of the social behavior seen in today's Internet was demonstrably present in nineteenth-century telegraph networks, and arguably in even earlier networks using visual signals.

In September 1940 George Stibitz used a teletype machine to send instructions for a problem set from his Model K at Dartmouth College in New Hampshire to his Complex Number Calculator in New York and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANet.

In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a computer (DEC's PDP-8) to route and manage telephone connections.

Throughout the 1960s Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems which used datagrams or packets that could be used in a packet switched network between computer systems.

In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN).

The first widely used PSTN switch that used true computer control was the Western Electric 1ESS switch, introduced in 1965.

In 1969 the University of California at Los Angeles, SRI (in Stanford), University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANet network using 50 kbit/s circuits. Commercial services using X.25 were deployed in 1972, and later used as an underlying infrastructure for expanding TCP/IP networks.

Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from the researcher to the home user.

Today, computer networks are the core of modern communication. For example, all modern aspects of the Public Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade and this boom in communications would not have been possible without the progressively advancing computer network.

Networking methods

Networking is a complex part of computing that makes up most of the IT Industry. Without networks, almost all communication in the world would cease to happen. It is because of networking that telephones, televisions, the internet, etc. work.

One way to categorize computer networks is by their geographic scope, although many real-world networks interconnect Local Area Networks (LAN) via Wide Area Networks (WAN) and wireless networks (WWAN). These three (broad) types are:

Local area network (LAN)

A local area network is a network that spans a relatively small space and provides services to a small number of people.

A peer-to-peer or client-server method of networking may be used. A peer-to-peer network is one in which each client shares its resources with the other workstations in the network; examples are small office networks, where resource use is minimal, and home networks. A client-server network is one in which every client is connected to the server and to each other. Client-server networks use servers in different capacities, which can be classified into two types: single-service servers, where the server performs one task such as file serving or print serving; and multi-purpose servers, which not only act as file and print servers but also conduct calculations and use them to provide information to clients (Web/intranet servers). Computers may be connected in many different ways, including Ethernet cables, wireless networks, or other types of wires such as power lines or phone lines.
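As a minimal illustration of the client-server pattern described above, the Python sketch below starts a tiny echo server in a background thread and has a client connect to it over a local TCP socket; the port number and message are arbitrary example values.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007           # example address and port only

def echo_server():
    # A one-shot server: accept a single client and echo back whatever it sends.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.5)                           # give the server a moment to start listening

# The client side: connect to the server and ask it to echo a message back.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from a client on the LAN")
    print("server replied:", cli.recv(1024).decode())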

The ITU-T G.hn standard is an example of a technology that provides high-speed (up to 1 Gbit/s) local area networking over existing home wiring (power lines, phone lines and coaxial cables).

Wide area network (WAN)

A wide area network is a network in which a wide variety of resources are deployed across a large domestic area or internationally. An example of this is a multinational business that uses a WAN to interconnect its offices in different countries. The largest and best example of a WAN is the Internet, which is a network composed of many smaller networks. The Internet is considered the largest network in the world.[7] The PSTN (Public Switched Telephone Network) is also an extremely large network that is converging to use Internet technologies, although not necessarily through the public Internet.

A Wide Area Network involves communication through the use of a wide range of technologies, including point-to-point WAN protocols such as the Point-to-Point Protocol (PPP) and High-Level Data Link Control (HDLC), Frame Relay, ATM (Asynchronous Transfer Mode) and SONET (Synchronous Optical Network). The differences between these WAN technologies lie in the switching capabilities they offer and the speed at which bits of information (data) are sent and received.

Metropolitan Area Network (MAN)

A metropolitan area network is a network that is too large for even the largest LANs but is not on the scale of a WAN. It integrates two or more LANs over a specific geographical area (usually a city) so as to extend the network and increase the flow of communications. The LANs in question would usually be connected via "backbone" lines.

For more information on WANs, see Frame Relay, ATM and Sonet.

Wireless networks (WLAN, WWAN)

A wireless network is basically the same as a LAN or a WAN but there are no wires between hosts and servers. The data is transferred over sets of radio transceivers. These types of networks are beneficial when it is too costly or inconvenient to run the necessary cables. For more information, see Wireless LAN and Wireless wide area network. The media access protocols for LANs come from the IEEE.

The most common IEEE 802.11 WLANs cover, depending on antennas, ranges from hundreds of meters to a few kilometers. For larger areas, either communications satellites of various types, cellular radio, or wireless local loop (IEEE 802.16) all have advantages and disadvantages. Depending on the type of mobility needed, the relevant standards may come from the IETF or the ITU.

Network topology

The network topology defines the way in which computers, printers, and other devices are connected, physically and logically. A network topology describes the layout of the wire and devices as well as the paths used by data transmissions.

Network topologies are of two types:

  • Physical
  • Logical

Commonly used topologies include:

  • Bus
  • Star
  • Tree (hierarchical)
  • Linear
  • Ring
  • Mesh
  • Partially connected
  • Fully connected (sometimes known as fully redundant)

The network topologies mentioned above are only a general representation of the kinds of topologies used in computer networks and are considered basic topologies.
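
To make the distinction between these basic topologies more concrete, the short Python sketch below builds adjacency lists for a hypothetical star and ring; the node names and helper functions are invented for illustration and do not come from the article.

    # Illustrative sketch: basic topologies expressed as adjacency lists.
    # Node names and helper functions are hypothetical examples.

    def star(center, leaves):
        """Star topology: every leaf connects only to the central node."""
        topology = {center: list(leaves)}
        for leaf in leaves:
            topology[leaf] = [center]
        return topology

    def ring(nodes):
        """Ring topology: each node connects to its two neighbours."""
        n = len(nodes)
        return {nodes[i]: [nodes[(i - 1) % n], nodes[(i + 1) % n]] for i in range(n)}

    hosts = ["A", "B", "C", "D"]
    print(star("hub", hosts))   # {'hub': ['A', 'B', 'C', 'D'], 'A': ['hub'], ...}
    print(ring(hosts))          # {'A': ['D', 'B'], 'B': ['A', 'C'], ...}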

Networking is defined with reference to the OSI (Open Systems Interconnection) model for communications. The OSI model consists of seven layers, each with its own function: Application, Presentation, Session, Transport, Network, Data Link, and Physical. The upper layers (Application, Presentation, Session) concentrate on the application, while the lower layers (Transport, Network, Data Link, and Physical) focus on carrying data from origin to destination.

  • Application (layer 7) defines the interfaces that communications software and applications use to communicate with other computers.
  • Presentation (layer 6) defines data formats such as text, JPEG, GIF, and binary; displaying a picture received in an e-mail is an example of this layer at work.
  • Session (layer 5) establishes how to start, control, and end links or conversations.
  • Transport (layer 4) includes protocols that provide functions such as error recovery, segmentation, and reassembly.
  • Network (layer 3) has as its primary job the end-to-end delivery of data packets. To do this it relies on logical addressing so that both the origin and the destination can be recognized; for example, IP running in a router examines the destination address, compares it to the IP routing table, fragments the packet into smaller pieces where necessary for transport, and forwards it toward the correct receiver.
  • Data Link (layer 2) sets the standards for data being delivered across a link or medium.
  • Physical (layer 1) deals with the physical characteristics of data transmission, such as the network card and the type of network cable.

An easy way to remember the layers of the OSI model is the mnemonic "All People Seem To Need Data Processing" (layers 7 to 1).
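
As a compact reference, the following Python sketch tabulates the seven layers and representative roles drawn from the description above; the role descriptions and example protocols are illustrative rather than definitive.

    # Illustrative summary of the OSI reference model described above;
    # the role descriptions and example protocols are informal.
    OSI_LAYERS = {
        7: ("Application",  "interfaces used by communications software (e.g. HTTP, SMTP)"),
        6: ("Presentation", "data formats such as text, JPEG, GIF and binary"),
        5: ("Session",      "starting, controlling and ending conversations"),
        4: ("Transport",    "error recovery, segmentation and reassembly"),
        3: ("Network",      "end-to-end delivery of packets using logical (IP) addresses"),
        2: ("Data Link",    "delivering data across a single link or medium"),
        1: ("Physical",     "network cards, cable types and signalling"),
    }

    # Mnemonic: All People Seem To Need Data Processing (layers 7 to 1).
    for number in sorted(OSI_LAYERS, reverse=True):
        name, role = OSI_LAYERS[number]
        print(f"Layer {number}: {name:<12} - {role}")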

Computer networking device

From Wikipedia, the free encyclopedia

Computer networking devices are units that mediate data in a computer network. Computer networking devices are also called network equipment, Intermediate Systems (IS) or InterWorking Units (IWU). Units which are the final receivers of data, or which generate data, are called hosts or data terminal equipment.

List of computer networking devices

Common basic networking devices:

  • Gateway: device sitting at a network node for interfacing with another network that uses different protocols. Works on OSI layers 4 to 7.
  • Router: a specialized network device that determines the next network point to which to forward a data packet toward its destination. Unlike a gateway, it cannot interface different protocols. Works on OSI layer 3.
  • Bridge: a device that connects multiple network segments along the data link layer. Works on OSI layer 2.
  • Switch: a device that allocates traffic from one network segment to certain lines (intended destination(s)) which connect the segment to another network segment. So unlike a hub a switch splits the network traffic and sends it to different destinations rather than to all systems on the network. Works on OSI layer 2.
  • Hub: connects multiple Ethernet segments together, making them act as a single segment. When using a hub, every attached device shares the same broadcast domain and the same collision domain; therefore, only one computer connected to the hub is able to transmit at a time. Depending on the network topology, the hub provides a basic OSI layer 1 connection among the network objects (workstations, servers, etc.). It provides bandwidth that is shared among all the objects, in contrast to switches, which provide a dedicated connection between individual nodes. Works on OSI layer 1.
  • Repeater: device to amplify or regenerate digital signals received while sending them from one part of a network to another. Works on OSI layer 1.

Some hybrid network devices:

  • Multilayer Switch: a switch which, in addition to switching on OSI layer 2, provides functionality at higher protocol layers.
  • Protocol Converter: a hardware device that converts between two different types of transmissions, such as asynchronous and synchronous transmissions.
  • Bridge Router (Brouter): combines router and bridge functionality and therefore works on OSI layers 2 and 3.
  • Digital media receiver: connects a computer network to a home theatre.

Hardware or software components that typically sit on the connection point of different networks, e.g. between an internal network and an external network:

  • Proxy: computer network service which allows clients to make indirect network connections to other network services
  • Firewall: a piece of hardware or software placed on the network to block communications forbidden by the network policy
  • Network Address Translator: a network service, provided as hardware or software, that converts internal network addresses to external ones and vice versa

Other hardware for establishing networks or dial-up connections:

  • Multiplexer: device that combines several electrical signals into a single signal
  • Network Card: a piece of computer hardware to allow the attached computer to communicate by network
  • Modem: device that modulates an analog "carrier" signal (such as sound) to encode digital information, and that also demodulates such a carrier signal to decode the transmitted information, as when a computer communicates with another computer over the telephone network
  • ISDN terminal adapter (TA): a specialized gateway for ISDN
  • Line Driver: a device to increase transmission distance by amplifying the signal; used on base-band networks only

Computer network diagram

From Wikipedia, the free encyclopedia

A computer network diagram is a schematic depicting the nodes and connections amongst nodes in a computer network or, more generally, any telecommunications network.

Symbolization

Readily identifiable icons are used to depict common network appliances, e.g. a router, and the style of the lines between them indicates the type of connection. Clouds are used to represent networks external to the one pictured, for the purposes of depicting connections between internal and external devices without indicating the specifics of the outside network. For example, in the hypothetical local area network pictured to the right, three personal computers and a server are connected to a switch; the server is further connected to a printer and a gateway router, which is connected via a WAN link to the Internet.

Depending on whether the diagram is intended for formal or informal use, certain details may be lacking and must be determined from context. For example, the sample diagram does not indicate the physical type of connection between the PCs and the switch, but since a modern LAN is depicted, Ethernet may be assumed. If the same style of line was used in a WAN (wide area network) diagram, however, it may indicate a different physical connection.

At different scales, diagrams may represent various levels of network granularity. At the LAN level, individual nodes may represent individual physical devices, such as hubs or file servers, while at the WAN level, individual nodes may represent entire cities. In addition, when the scope of a diagram crosses the common LAN/MAN/WAN boundaries, representative hypothetical devices may be depicted instead of showing all actually existing nodes. For example, if a network appliance is intended to be connected through the Internet to many end-user mobile devices, only a single such device may be depicted for the purposes of showing the general relationship between the appliance and any such device.

Cisco Symbolization

Cisco uses its own brand of networking symbols. Since Cisco has a large Internet presence and designs a broad variety of network devices, its list of symbols ("Network Topology Icons") is exhaustive. As of November 28, 2006 this list can be found at http://www.cisco.com/web/about/ac50/ac47/2.html

Topology

The physical network topology can be directly represented in a network diagram, as it is simply the physical graph represented by the diagram, with network nodes as vertices and connections as undirected or directed edges (depending on the type of connection). The logical network topology can be inferred from the network diagram if details of the network protocols in use are also given.

Client-server

From Wikipedia, the free encyclopedia

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients.[1] Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

Description

Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access and database access, are based on the client-server model. For example, a web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.

The client-server model has become one of the central ideas of network computing. Many business applications being written today use the client-server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, DNS. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.

Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.
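
As a minimal sketch of this request/response pattern, the following Python code runs a tiny server and a client in one process using the standard socket module; the port number and the trivial "uppercase" service are arbitrary choices for illustration, not part of any real application protocol.

    # Minimal client-server sketch: one server program listening for requests,
    # one client program sending a request and reading the reply.
    # The port and the trivial "uppercase" service are illustrative only.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5050
    ready = threading.Event()

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()                        # the server waits for incoming requests
            ready.set()                         # signal that the server is listening
            conn, _addr = srv.accept()          # a client initiates the session
            with conn:
                request = conn.recv(1024)
                conn.sendall(request.upper())   # process the request, return a response

    def client():
        ready.wait()                            # don't connect before the server listens
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"account balance please")
            print(cli.recv(1024))               # b'ACCOUNT BALANCE PLEASE'

    if __name__ == "__main__":
        t = threading.Thread(target=server)
        t.start()
        client()
        t.join()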

The most basic type of client-server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. In a two-tier architecture the client acts as one tier, and the application together with the server acts as the other tier.

The interaction between client and server is often described using sequence diagrams. Sequence diagrams are standardized in the Unified Modeling Language.

Specific types of clients include web browsers, email clients, and online chat clients.

Specific types of servers include web servers, ftp servers, application servers, database servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Comparison to peer-to-peer architecture

In peer-to-peer architectures, each host or instance of the program can simultaneously act as both a client and a server, and each has equivalent responsibilities and status.

Both client-server and peer-to-peer architectures are in wide usage today. Details may be found in Comparison of Centralized (Client-Server) and Decentralized (Peer-to-Peer) Networking.

Comparison to client-queue-client architecture

While the classic client-server architecture requires one of the communication endpoints to act as a server, which is much harder to implement, the client-queue-client architecture allows all endpoints to be simple clients, while the server consists of some external software acting as a passive queue: one software instance passes its query to the queue (e.g. a database), and another instance pulls the query from the queue, produces a response, and passes it back through the queue. This architecture greatly simplifies the software implementation. Peer-to-peer architecture was originally based on the client-queue-client concept.
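
A rough sketch of the client-queue-client idea, assuming an in-process queue standing in for the external software (for example, a database acting as a passive queue); the names and message contents are invented for illustration.

    # Client-queue-client sketch: neither endpoint listens for connections;
    # both only push to and pull from a passive queue. The in-memory queues
    # stand in for an external store (e.g. a database) and are illustrative.
    import queue
    import threading

    requests = queue.Queue()    # client A -> client B
    responses = queue.Queue()   # client B -> client A

    def client_a():
        requests.put("what time is it?")    # a simple client: just enqueues a query
        return responses.get(timeout=1)     # ...and later pulls the answer

    def client_b():
        query = requests.get(timeout=1)     # the other endpoint is also just a client
        responses.put(f"reply to: {query!r}")

    if __name__ == "__main__":
        worker = threading.Thread(target=client_b)
        worker.start()
        print(client_a())                   # reply to: 'what time is it?'
        worker.join()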

Advantages

In most cases, a client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage to this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change.

All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.

Since data storage is centralized, updates to that data are far easier to administer than what would be possible under a P2P paradigm. Under a P2P architecture, data updates may need to be distributed and applied to each peer in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.

Many mature client-server technologies are already available which were designed to ensure security, friendliness of the user interface, and ease of use.

It functions with multiple clients of different capabilities.

Disadvantages

Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that to a P2P network, where its aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.

The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients' requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.

Active networking

From Wikipedia, the free encyclopedia

Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network.

How it works

Active network architecture is composed of execution environments (similar to a unix shell that can execute active packets), a node operating system capable of supporting one or more execution environments. It also consists of active hardware, capable of routing or switching as well as executing code within active packets. This differs from the traditional network architecture which seeks robustness and stability by attempting to remove complexity and the ability to change its fundamental operation from underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks.
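
As a toy illustration of this idea, the sketch below models an "active packet" that carries a small program which every node along the path executes; the node names, the compression rule, and the whole execution model are invented for illustration and greatly simplified compared with real active-network execution environments.

    # Toy illustration of the active-networking idea described above: each packet
    # carries both data and a small program, and every node it traverses runs that
    # program inside a (here trivial) execution environment. Entirely hypothetical.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ActivePacket:
        payload: bytes
        program: Callable[[bytes, str], bytes]   # code executed at each node

    def compress_on_slow_links(payload: bytes, node: str) -> bytes:
        # pretend "edge" nodes sit on slow links and shrink the payload
        return payload[:16] if node.startswith("edge") else payload

    def forward(packet: ActivePacket, path: list[str]) -> bytes:
        data = packet.payload
        for node in path:
            data = packet.program(data, node)    # each node executes the packet's code
        return data

    pkt = ActivePacket(b"x" * 64, compress_on_slow_links)
    print(len(forward(pkt, ["core-1", "core-2", "edge-9"])))   # 16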

What does it offer?

Active networking allows the possibility of highly tailored and rapid "real-time" changes to the underlying network operation. This enables such ideas as sending code along with packets of information allowing the data to change its form (code) to match the channel characteristics. The smallest program that can generate a sequence of data can be found in the definition of Kolmogorov Complexity. The use of real-time genetic algorithms within the network to compose network services is also enabled by active networking.

Fundamental Challenges

Active network research addresses the nature of how best to incorporate extremely dynamic capability within networks.

In order to do this, active network research must address the problem of optimally allocating computation versus communication within communication networks[2]. A similar problem related to the compression of code as a measure of complexity is addressed via algorithmic information theory.

Nanoscale Active Networks

As the limit in the reduction of transistor size is reached with current technology, active networking concepts are being explored as a more efficient means of accomplishing computation and communication.

Wireless LAN part2

From Wikipedia, the free encyclopedia

Architecture

Stations

All components that can connect into a wireless medium in a network are referred to as stations.

All stations are equipped with wireless network interface cards (WNICs).

Wireless stations fall into one of two categories: access points, and clients.

Access points (APs), normally routers, are base stations for the wireless network. They transmit and receive radio frequencies for wireless enabled devices to communicate with.

Wireless clients can be mobile devices such as laptops, personal digital assistants, IP phones, or fixed devices such as desktops and workstations that are equipped with a wireless network interface.

Basic service set

The basic service set (BSS) is a set of all stations that can communicate with each other.

There are two types of BSS: Independent BSS (also referred to as IBSS), and infrastructure BSS.

Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS.

An independent BSS (IBSS) is an ad-hoc network that contains no access points, which means it cannot connect to any other basic service set.

An infrastructure BSS can communicate with stations that are not in the same basic service set by communicating through access points.

Extended service set

An extended service set (ESS) is a set of connected BSSes. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID which is a 32-byte (maximum) character string.
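
Since the SSID is limited to 32 bytes, a configuration tool might validate candidate names along the following lines; this check is an illustrative sketch, not something mandated by the 802.11 standard.

    # Illustrative check that an SSID fits the 32-byte maximum mentioned above.
    MAX_SSID_BYTES = 32

    def valid_ssid(ssid: str) -> bool:
        """Return True if the SSID encodes to between 1 and 32 bytes."""
        return 0 < len(ssid.encode("utf-8")) <= MAX_SSID_BYTES

    print(valid_ssid("HomeNetwork"))    # True
    print(valid_ssid("x" * 33))         # False: 33 bytes is too long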

Distribution system

A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells.

Types of wireless LANs

Peer-to-peer

An ad-hoc network is a network where stations communicate only peer to peer (P2P). There is no base and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS).

A peer-to-peer (P2P) network allows wireless devices to directly communicate with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers so that they can connect to each other to form a network.

If a signal strength meter is used in this situation, it may not read the strength accurately and can be misleading, because it registers the strength of the strongest signal, which may be the closest computer.

The 802.11 specifications define the physical layer (PHY) and MAC (Media Access Control) layer. However, unlike most other IEEE specifications, 802.11 includes three alternative PHY standards: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. A single 802.11 MAC standard is based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The 802.11 specification includes provisions designed to minimize collisions, because two mobile units may both be in range of a common access point but not in range of each other. 802.11 has two basic modes of operation: ad hoc mode, which enables peer-to-peer transmission between mobile units, and infrastructure mode, in which mobile units communicate through an access point that serves as a bridge to a wired network infrastructure; the latter is the more common wireless LAN application and the one covered here. Since wireless communication uses a more open medium than wired LANs, the 802.11 designers also included shared-key encryption mechanisms, Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA, WPA2), to secure wireless computer networks.
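
The following heavily simplified Python sketch illustrates the carrier-sense and collision-avoidance behaviour just described: a station senses the medium and, if it is busy, waits a random backoff before retrying. The busy-probability, slot counts, and retry limit are invented purely for illustration.

    # Very simplified CSMA/CA sketch: listen before transmitting, and back off
    # for a random interval when the medium is busy. The busy probability and
    # slot counts are invented purely for illustration.
    import random

    def medium_is_busy() -> bool:
        return random.random() < 0.6   # pretend the channel is busy 60% of the time

    def send_with_csma_ca(frame: str, max_attempts: int = 5) -> bool:
        contention_window = 4          # initial number of backoff slots
        for attempt in range(1, max_attempts + 1):
            if not medium_is_busy():   # carrier sense: transmit only on an idle channel
                print(f"attempt {attempt}: channel idle, sending {frame!r}")
                return True
            backoff = random.randint(0, contention_window - 1)
            print(f"attempt {attempt}: channel busy, backing off {backoff} slots")
            contention_window *= 2     # exponential backoff after each deferral
        return False

    if __name__ == "__main__":
        send_with_csma_ca("hello over 802.11")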

Bridge

A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the Wireless LAN.

Wireless distribution system

A Wireless Distribution System is a system that enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client packets across links between access points.

An access point can be either a main, relay or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients or other relay stations to either a main or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations. Connections between "clients" are made using MAC addresses rather than by specifying IP assignments.

All base stations in a Wireless Distribution System must be configured to use the same radio channel, and share WEP keys or WPA keys if they are used. They can be configured to different service set identifiers. WDS also requires that every base station be configured to forward to others in the system.

WDS may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). It should be noted, however, that throughput in this method is halved for all clients connected wirelessly.

When it is difficult to connect all of the access points in a network by wires, it is also possible to put up access points as repeaters.
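
The halving of throughput mentioned above can be approximated with a back-of-the-envelope calculation: each wireless relay hop roughly halves the usable rate, because the same radio must receive and then retransmit every frame. The link rate and hop counts below are illustrative figures, not measurements.

    # Rough illustration of the WDS/repeater throughput penalty described above:
    # every wireless relay hop roughly halves the usable rate, since one radio
    # must both receive and retransmit each frame. Numbers are illustrative.
    def effective_throughput(link_rate_mbps: float, repeater_hops: int) -> float:
        return link_rate_mbps / (2 ** repeater_hops)

    for hops in range(3):
        print(f"{hops} repeater hop(s): ~{effective_throughput(54, hops):.1f} Mbit/s")
    # 0 repeater hop(s): ~54.0 Mbit/s
    # 1 repeater hop(s): ~27.0 Mbit/s
    # 2 repeater hop(s): ~13.5 Mbit/s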

Roaming

There are two definitions of wireless LAN roaming:

  • Internal Roaming (1): The mobile station (MS) moves from one access point (AP) to another AP within its home network because the signal strength has become too weak. An authentication server (RADIUS) performs the re-authentication of the MS via 802.1X (e.g. with PEAP). Billing and QoS accounting remain within the home network. A mobile station roaming from one access point to another often interrupts the flow of data between the mobile station and an application connected to the network. The mobile station, for instance, periodically monitors the presence of alternative access points (ones that will provide a better connection). At some point, based upon proprietary mechanisms, the mobile station decides to re-associate with an access point having a stronger wireless signal (see the sketch after this list). The mobile station, however, may lose its connection with an access point before associating with another access point. In order to provide reliable connections with applications, the mobile station must generally include software that provides session persistence.
  • External Roaming (2): The MS (client) moves into the WLAN of another Wireless Internet Service Provider (WISP) and uses its services (hotspot). The user can use a foreign network independently of their home network, provided the foreign network is open to visitors. Special authentication and billing systems are needed for mobile services in a foreign network.
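
As referenced in the internal-roaming item above, the following sketch shows one way a station might decide when to re-associate: stay with the current access point while its signal is acceptable, otherwise pick the strongest access point seen in a scan. The RSSI values and threshold are invented examples; real drivers use proprietary mechanisms.

    # Illustrative internal-roaming decision: re-associate with the access point
    # offering the strongest signal once the current one becomes too weak.
    # The RSSI readings (in dBm) and the threshold are invented examples.
    ROAM_THRESHOLD_DBM = -75

    def pick_access_point(current_ap: str, scan_results: dict[str, int]) -> str:
        if scan_results[current_ap] >= ROAM_THRESHOLD_DBM:
            return current_ap                           # current signal still acceptable
        return max(scan_results, key=scan_results.get)  # otherwise take the strongest AP

    scans = {"ap-lobby": -82, "ap-floor2": -60, "ap-cafe": -71}
    print(pick_access_point("ap-lobby", scans))         # 'ap-floor2'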

Wireless LAN part1

From Wikipedia, the free encyclopedia

A wireless LAN (WLAN) is a wireless local area network that links two or more computers or devices using spread-spectrum or OFDM modulation technology to enable communication between devices in a limited area. This gives users the mobility to move around within a broad coverage area and still be connected to the network.

For the home user, wireless has become popular due to ease of installation, and location freedom with the gaining popularity of laptops. Public businesses such as coffee shops or malls have begun to offer wireless access to their customers; some are even provided as a free service. Large wireless network projects are being put up in many major cities: New York City, for instance, has begun a pilot program to cover all five boroughs of the city with wireless Internet access.

History

In 1970 Norman Abramson, professor at the University of Hawaii, developed the world’s first computer communication network using low-cost ham-like radios, named ALOHAnet. The bi-directional star topology of the system included seven computers deployed over four islands to communicate with the central computer on the Oahu Island without using phone lines.

"In 1979, F.R. Gfeller and U. Bapst published a paper in the IEEE Proceedings reporting an experimental wireless local area network using diffused infrared communications. Shortly thereafter, in 1980, P. Ferrert reported on an experimental application of a single code spread spectrum radio for wireless terminal communications in the IEEE National Telecommunications Conference. In 1984, a comparison between Infrared and CDMA spread spectrum communications for wireless office information networks was published by Kaveh Pahlavan in IEEE Computer Networking Symposium which appeared later in the IEEE Communication Society Magazine. In May 1985, the efforts of Marcus led the FCC to announce experimental ISM bands for commercial application of spread spectrum technology. Later on, M. Kavehrad reported on an experimental wireless PBX system using code division multiple access. These efforts prompted significant industrial activities in the development of a new generation of wireless local area networks and it updated several old discussions in the portable and mobile radio industry.

The first generation of wireless data modems was developed in the early 1980s by amateur radio operators, who commonly referred to this as packet radio. They added a voice-band data communication modem, with data rates below 9600 bit/s, to an existing short-distance radio system, typically in the two-meter amateur band. The second generation of wireless modems was developed immediately after the FCC announcement of the experimental bands for non-military use of spread spectrum technology. These modems provided data rates on the order of hundreds of kbit/s. The third generation of wireless modems aimed at compatibility with existing LANs, with data rates on the order of Mbit/s. Several companies developed third-generation products with data rates above 1 Mbit/s, and a couple of products had already been announced by the time of the first IEEE Workshop on Wireless LANs.

"The first of the IEEE Workshops on Wireless LAN was held in 1991. At that time early wireless LAN products had just appeared in the market and the IEEE 802.11 committee had just started its activities to develop a standard for wireless LANs. The focus of that first workshop was evaluation of the alternative technologies. By 1996, the technology was relatively mature, a variety of applications had been identified and addressed and technologies that enable these applications were well understood. Chip sets aimed at wireless LAN implementations and applications, a key enabling technology for rapid market growth, were emerging in the market. Wireless LANs were being used in hospitals, stock exchanges, and other in building and campus settings for nomadic access, point-to-point LAN bridges, ad-hoc networking, and even larger applications through internetworking. The IEEE 802.11 standard and variants and alternatives, such as the wireless LAN interoperability forum and the European HiperLAN specification had made rapid progress, and the unlicensed PCS Unlicensed Personal Communications Services and the proposed SUPERNet, later on renamed as U-NII, bands also presented new opportunities."

Originally WLAN hardware was so expensive that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by standards, primarily the various versions of IEEE 802.11 (Wi-Fi). An alternative ATM-like 5 GHz standardized technology, HiperLAN/2, has so far not succeeded in the market, and with the release of the faster 54 Mbit/s 802.11a (5 GHz) and 802.11g (2.4 GHz) standards, almost certainly never will.

In November 2007, the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) won a legal battle in the US federal court of Texas against Buffalo Technology, which found that the US manufacturer had failed to pay royalties on a US WLAN patent CSIRO had filed in 1996; however, in September 2008 the Federal Circuit remanded the case back to the district court.[4] CSIRO then engaged in legal action against fourteen other computer companies, including Microsoft, Intel, Dell, Hewlett-Packard and Netgear, who argued that the patent is invalid and should negate any royalties paid to CSIRO for WLAN-based products.[5] In 2009 these cases were settled out of court, which may see billions in royalties flow to CSIRO. In a statement to the media, CSIRO Chief Executive Megan Clark said that "CSIRO will continue to defend intellectual property developed from research undertaken on behalf of the Australian taxpayer."

Benefits

The popularity of wireless LANs is a testament primarily to their convenience, cost efficiency, and ease of integration with other networks and network components. The majority of computers sold to consumers today come pre-equipped with all necessary wireless LAN technology. Benefits of wireless LANs include:

Convenience

  • The wireless nature of such networks allows users to access network resources from nearly any convenient location within their primary networking environment (home or office). With the increasing saturation of laptop-style computers, this is particularly relevant.

Mobility

  • With the emergence of public wireless networks, users can access the internet even outside their normal work environment. Most chain coffee shops, for example, offer their customers a wireless connection to the internet at little or no cost.

Productivity

  • Users connected to a wireless network can maintain a nearly constant affiliation with their desired network as they move from place to place. For a business, this implies that an employee can potentially be more productive as his or her work can be accomplished from any convenient location. For example, a hospital or warehouse may implement Voice over WLAN applications that enable mobility and cost savings.

Deployment

  • Initial setup of an infrastructure-based wireless network requires little more than a single access point. Wired networks, on the other hand, have the additional cost and complexity of actual physical cables being run to numerous locations (which can even be impossible for hard-to-reach locations within a building).

Expandability

  • Wireless networks can serve a suddenly-increased number of clients with the existing equipment. In a wired network, additional clients would require additional wiring.

Cost

  • Wireless networking hardware is at worst a modest increase in cost over wired counterparts. This potentially increased cost is almost always more than outweighed by the savings in cost and labor associated with running physical cables.

Disadvantages

Wireless LAN technology, while replete with the conveniences and advantages described above, has its share of downfalls. For a given networking situation, wireless LANs may not be desirable for a number of reasons. Most of these have to do with the inherent limitations of the technology.

Security

  • Wireless LAN transceivers are designed to serve computers throughout a structure with uninterrupted service using radio frequencies. Because of space and cost, the antennas typically present on wireless networking cards in the end computers are generally relatively poor. In order to properly receive signals using such limited antennas throughout even a modest area, the wireless LAN transceiver utilizes a fairly considerable amount of power. What this means is that not only can the wireless packets be intercepted by a nearby adversary's poorly-equipped computer, but, more importantly, a user willing to spend a small amount of money on a good quality antenna can pick up packets at a remarkable distance, perhaps hundreds of times the radius of the typical user. In fact, there are even computer users dedicated to locating and sometimes even cracking into wireless networks, known as wardrivers. On a wired network, any adversary would first have to overcome the physical limitation of tapping into the actual wires, but this is not an issue with wireless packets. To combat this, wireless network users usually choose to utilize various encryption technologies such as Wi-Fi Protected Access (WPA). Some of the older encryption methods, such as WEP, are known to have weaknesses that a dedicated adversary can compromise. (See main article: Wireless security.)

Range

  • The typical range of a common 802.11g network with standard equipment is on the order of tens of metres. While sufficient for a typical home, it will be insufficient in a larger structure. To obtain additional range, repeaters or additional access points will have to be purchased. Costs for these items can add up quickly. Other technologies are in the development phase, however, which feature increased range, hoping to render this disadvantage irrelevant. 

Reliability

  • Like any radio frequency transmission, wireless networking signals are subject to a wide variety of interference, as well as complex propagation effects (such as multipath, or especially in this case Rician fading) that are beyond the control of the network administrator. Among the most insidious problems that can affect the stability and reliability of a wireless LAN are microwave ovens[8] and analog wireless transmitters such as baby monitors[9]. In the case of typical networks, modulation is achieved by complicated forms of phase-shift keying (PSK) or quadrature amplitude modulation (QAM), making interference and propagation effects all the more disturbing. As a result, important network resources such as servers are rarely connected wirelessly.

Speed

  • The speed on most wireless networks (typically 1-108 Mbit/s) is reasonably slow compared to the slowest common wired networks (100 Mbit/s up to several Gbit/s). There are also performance issues caused by TCP and its built-in congestion avoidance. For most users, however, this observation is irrelevant since the speed bottleneck is not in the wireless routing but rather in the outside network connectivity itself. For example, the maximum ADSL throughput (usually 8 Mbit/s or less) offered by telecommunications companies to general-purpose customers is already far slower than the slowest wireless network to which it is typically connected. That is to say, in most environments, a wireless network running at its slowest speed is still faster than the internet connection serving it in the first place. However, in specialized environments, higher throughput through a wired network might be necessary. Newer standards such as 802.11n are addressing this limitation and will support peak throughput in the range of 100-200 Mbit/s.

Radio Emissions

  • Wireless LANs utilize radio emissions for communication, which can cause interference in other devices and may have potentially deleterious effects on human health. See also electrosmog.