# Simple Reflex Agent
The simple reflex agent acts only on the basis of the current percept; it has no notion of percept history. Its main characteristics are that it selects actions using condition-action rules applied to the current percept alone, so it works well only when the correct decision can be made from the current percept, that is, when the environment is fully observable.
# Reflex Agent with State (Model-Based Reflex Agent)
This type of agent differs from the simple reflex agent in that it maintains some sort of internal state based on the percept sequence. The state is updated regularly from what the agent perceives and from the actions it takes, so the agent keeps track of aspects of the world it cannot currently observe. The model-based agent works as follows: it uses its internal model to interpret the current percept, updates its state, and then applies its condition-action rules to the updated state.
# Goal-Based Agent
A goal-based agent extends the reflex agent with stored information about which situations are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
# Utility-Based Agent
The utility-based agent provides a more general agent framework. When there are multiple, possibly conflicting goals, this framework can accommodate all of them. Such systems are characterized by a utility function that maps a state, or a sequence of states, to a real-valued utility.
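For illustration, here is a minimal Python sketch of how internal state, a transition model, and a utility function fit together in a model-based, utility-driven agent. All names (`predict`, `utility`, the dictionary-shaped `percept`) are hypothetical placeholders and not part of the original notes.

```python
# Minimal sketch of a model-based, utility-driven agent (hypothetical names).
class UtilityBasedAgent:
    def __init__(self, actions, predict, utility, initial_state):
        self.actions = actions        # actions available to the agent
        self.predict = predict        # model: (state, action) -> predicted next state
        self.utility = utility        # maps a state to a real-valued utility
        self.state = initial_state    # internal state, updated from percepts

    def update_state(self, percept):
        # Model-based reflex part: fold the new percept into the internal state.
        self.state = {**self.state, **percept}

    def choose_action(self, percept):
        self.update_state(percept)
        # Utility-based part: pick the action whose predicted outcome scores highest.
        return max(self.actions,
                   key=lambda a: self.utility(self.predict(self.state, a)))
```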
Thursday, September 9, 2010
Types of agent
Artificial Agent
The word "agent" is derived from the concept of agency, which means employing someone to act on behalf of a user. An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A human agent has eyes, ears and other organs as sensors, and body parts such as hands, legs and mouth as actuators. A software agent receives keystrokes, file contents and network packets as sensory inputs, and acts on the environment by displaying on the screen, writing files and sending network packets. Computer agents (or software agents) have several characteristics which distinguish them from ordinary programs. They are:
In AI, we use the term "intelligent agent", which represents a new technology for software programs. AI is about the design and implementation of intelligent agents. Turing suggested that a system or agent can be said to be intelligent when the agent's performance cannot be distinguished from that of a human performing the same task. A better definition of an artificial agent is: an intelligent agent is a software entity which senses its environment and then carries out some set of operations on behalf of a user (or a program), with some degree of autonomy, and in doing so employs some knowledge or representation of the user's goals or desires. In other words, intelligent agents are software programs which work in the background to carry out specific, repetitive, predictable tasks for an individual user, business process, or software application.
Structure of Agent
A simple agent program can be defined mathematically by an agent function which maps every possible percept sequence to an action:
f : P* → A
where P* is the set of percept sequences and A is the set of actions. The agent program runs on the physical architecture and implements the agent function f. That is, agent = architecture + program. For example, a vacuum cleaner agent:
The agent program RVF (Reflex-Vacuum-Agent) can be written as:
function RVF([location, status]) returns an action
    if status == dirty then return Suck
    else if location == A then return Right
    else return Left
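As a concrete illustration, the same reflex vacuum agent can be sketched in Python; the percept format and action names simply mirror the pseudocode above and are otherwise arbitrary.

```python
# Simple reflex vacuum agent: the action depends only on the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    elif location == "A":
        return "right"
    else:
        return "left"

# Example percepts and the actions chosen for them.
print(reflex_vacuum_agent(("A", "dirty")))   # suck
print(reflex_vacuum_agent(("A", "clean")))   # right
print(reflex_vacuum_agent(("B", "clean")))   # left
```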
Wednesday, September 8, 2010
Notes of Computer Networks
Subject Chapters for Computer Networks | Related Links for the given chapters |
---|---|
Data communication | |
Components of Data Communications System | |
Data representation | |
Data flow | |
Chapter 1 |
Notes of Algorithm Design & Analysis
Subject Chapter Lists for Algorithm Design & Analysis | Related Links for the given chapters |
---|---|
About Sorting | >> Go for related topic <<< |
Merge sort | >> Go for related topic <<< |
Selection sort | >> Go for related topic <<< |
Insertion sort | >> Go for related topic <<< |
Data Communications
Things to remember before going on to computer networks: to understand the concept of a network, we first have to understand data communications, because the main purpose of a network is to transmit data between the computers connected to it. To fully grasp networking, we must understand the components of a data communications system, how data can be represented in various forms, and how data flows between devices. Data communication between remote parties is established through networking, which involves connecting computers, networking devices, transmission media, and so on. There are mainly two types of networks: Local Area Networks (LANs) and Wide Area Networks (WANs), which have distinct characteristics and different functionalities. Protocols and standards are two other main components of computer networking and are vital to the implementation of data communications: a protocol is a set of rules, and a standard is a protocol that has been adopted by vendors and manufacturers. Network models serve to organize, unify, and control the hardware and software components of data communications and networking.
While we communicate, we are sharing something, i.e., information. This sharing can be local or remote. Between individuals, local communication usually takes place face to face, while remote communication occurs over a distance. The term telecommunication (from the Greek tele, "far"), which includes telephony, telegraphy, and television, means communication at a distance. The word data refers to information, presented in whatever form, that is shared among the networking devices connected to the network. Data communication is the exchange of data between two devices through some form of transmission medium such as a wire cable. For data communication to occur, the communicating devices must be part of a communication system made up of a combination of hardware (physical equipment) and software (programs). The effectiveness of a data communication system depends on four fundamental characteristics: delivery, accuracy, timeliness, and jitter.
Simulation And Modeling
Subject Chapters for Simulation & Modeling | Related Links for the given chapters |
---|---|
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Wireless Networking
Subject Chapters for Wireless Networking | Related Links for the given chapters |
---|---|
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Artificial Intelligence
Subject Chapters for Artificial Intelligence | Related Links for the given chapters |
---|---|
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Coming soon... | Coming soon... |
Data Flow
Simplex: In simplex mode, communication is unidirectional, as on a one-way street. Data flows in only one direction: of the two devices on a link, only one can transmit and the other can only receive. Keyboards and traditional monitors are examples of simplex devices; the keyboard provides only input, while the monitor accepts only output. Simplex mode can use the entire capacity of the channel to send data in one direction.
Half-Duplex: In half-duplex mode, each station can both send and receive, but not at the same time. When one device is sending, the other can only receive, and vice versa. Half-duplex mode is like a one-lane road with traffic allowed in both directions: when cars are travelling one way, cars going the other way must wait. In a half-duplex transmission, the entire capacity of the channel is taken over by whichever of the two devices is transmitting at the time. Walkie-talkies and CB (citizens band) radios are both half-duplex systems.
Full-Duplex: In full-duplex mode (also called duplex), both stations can transmit and receive simultaneously. Full-duplex mode is like a two-way street with traffic flowing in both directions at the same time. In full-duplex mode, signals going in one direction share the capacity of the link with signals going in the other direction. This sharing can occur in two ways: either the link contains two physically separate transmission paths, one for sending and one for receiving, or the capacity of the channel is divided between the signals travelling in the two directions.
Data Representation
Information today comes in different forms such as text, numbers, images, audio and video. A brief description of each of them is given below:
Video: Video is another form of data transmitted over computer networks. It refers to the recording or broadcasting of a picture or movie. Video can either be produced as a continuous entity (e.g., by a TV camera) or as a combination of images, each a discrete entity, arranged to convey the idea of motion. Video can be represented as an analog or a digital signal.
Components of Data Communications System
Mainly, a data communications system consists of five components:
- Message: the information (data) to be communicated.
- Sender: the device that sends the data message.
- Receiver: the device that receives the message.
- Transmission medium: the physical path by which the message travels from sender to receiver.
- Protocol: the set of rules that governs the data communication.
Monday, September 6, 2010
About Sorting
About Sorting (Source: Wikipedia)
Sorting is any technique or process of arranging items in some sequence and/or in different sets. Accordingly, it has two common, yet distinct meanings:
- ordering: arranging items of the same kind, class, nature, etc. in some ordered sequence,
- categorizing: grouping and labeling items with similar properties together (by sorts).
For the sorting to be unique, these two are restricted to a total order and a strict total order, respectively. For sorting we can either specify a weak order, "should not come after", or a strict weak order, "should come before" (specifying one also defines the other: the two are the complement of the inverse of each other; see operations on binary relations).
Sorting n-tuples (depending on context also called e.g. records consisting of fields) can be done based on one or more of its components. More generally objects can be sorted based on a property. Such a component or property is called a sort key.
For example, the items are books, the sort key is the title, subject or author, and the order is alphabetical.
A new sort key can be created from two or more sort keys by lexicographical order. The first is then called the primary sort key, the second the secondary sort key, etc.
For example, addresses could be sorted using the city as primary sort key, and the street as secondary sort key.
If the sort key values are totally ordered, the sort key defines a weak order of the items: items with the same sort key are equivalent with respect to sorting. If different items have different sort key values then this defines a unique order of the items.
A standard order is often called ascending (corresponding to the fact that the standard order of numbers is ascending, i.e. A to Z, 0 to 9), the reverse order descending (Z to A, 9 to 0).
Now if you sort on different keys, then you get different lists of header information (such as the author's name) with the appended tailing records (such as title or publisher). Sorting in computer science is one of the most extensively researched subjects because of the need to speed up the operation on thousands or millions of records during a search operation; see sorting algorithm.
The main purpose of sorting information is to optimise its usefulness for specific tasks. In general, there are two ways of grouping information: by category, e.g. a shopping catalogue where items are compiled together under headings such as 'home', 'sport & leisure', 'women's clothes' etc. (nominal scale), and by the intensity of some property, such as price, e.g. from the cheapest to the most expensive (ordinal scale). Richard Saul Wurman, in his book Information Anxiety, proposes that the most common sorting purposes are by Name, by Location and by Time (these are actually special cases of category and hierarchy).
Together these give the acronym LATCH (Location, Alphabetical, Time, Category, Hierarchy) and can be used to describe just about every type of ordered information.
Often information is sorted using different methods at different levels of abstraction: e.g. the Nepal telephone directories which are sorted by location, by category (business or residential) and then alphabetically. New media still subscribe to these basic sorting methods: e.g. a Google search returns a list of web pages in a hierarchical list based on its own scoring system for how closely they match the search criteria (from closest match downwards).
The opposite of sorting, rearranging a sequence of items in a random or meaningless order, is called shuffling.
Merge Sort
Merge Sort (Source: Wikipedia)
Merge sort is an O(n log n) comparison-based sorting algorithm. In most implementations it is stable, meaning that it preserves the input order of equal elements in the sorted output. It is an example of the divide and conquer algorithmic paradigm. It was invented by John von Neumann in 1945.
Figure: example of merge sort sorting a list of random dots.
Class | Sorting algorithm |
---|---|
Data structure | Array |
Worst case performance | Θ(n log n) |
Best case performance | Θ(n log n) typical, Θ(n) natural variant |
Average case performance | Θ(n log n) |
Worst case space complexity | Θ(n) auxiliary |
Algorithm
Conceptually, a merge sort works as follows:
- If the list is of length 0 or 1, then it is already sorted. Otherwise:
- Divide the unsorted list into two sublists of about half the size.
- Sort each sublist recursively by re-applying merge sort.
- Merge the two sublists back into one sorted list.
Merge sort incorporates two main ideas to improve its runtime:
- A small list will take fewer steps to sort than a large list.
- Fewer steps are required to construct a sorted list from two sorted lists than two unsorted lists. For example, you only have to traverse each list once if they're already sorted (see the merge function below for an example implementation).
Suppose we have an array A with n elements indexed A[0] to A[n − 1]. We apply merge sort to the two halves A[0 .. c − 1] and A[c .. n − 1], where c is the integer part of n / 2. When the two halves are returned they will have been sorted, and they can then be merged together to form a sorted array.
function merge_sort(m)
    if length(m) ≤ 1
        return m
    var list left, right, result
    var integer middle = length(m) / 2
    for each x in m up to middle
        add x to left
    for each x in m after middle
        add x to right
    left = merge_sort(left)
    right = merge_sort(right)
    result = merge(left, right)
    return result
After writing the merge_sort function, it is still necessary to merge the left and right lists created above. There are several variants of the merge() function; one possibility is this:
function merge(left, right)
    var list result
    while length(left) > 0 or length(right) > 0
        if length(left) > 0 and length(right) > 0
            if first(left) ≤ first(right)
                append first(left) to result
                left = rest(left)
            else
                append first(right) to result
                right = rest(right)
        else if length(left) > 0
            append first(left) to result
            left = rest(left)
        else if length(right) > 0
            append first(right) to result
            right = rest(right)
    end while
    return result
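For readers who prefer runnable code, here is a small Python version of the same top-down algorithm. It is a straightforward transcription of the pseudocode above, not an optimized implementation; the function names simply mirror the pseudocode.

```python
# Top-down merge sort: split, sort each half recursively, then merge.
def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])
    right = merge_sort(m[middle:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller front element of the two sorted lists.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    # One list is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```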
Analysis
In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form follows from the master theorem.
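As a sketch of that last step (standard master-theorem reasoning, added here for clarity rather than taken from the article), the recurrence solves as follows:

```latex
% Master theorem, case 2: a = 2, b = 2, f(n) = n = \Theta\big(n^{\log_b a}\big)
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + n
\quad\Longrightarrow\quad
T(n) = \Theta\big(n^{\log_2 2}\log n\big) = \Theta(n \log n).
```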
In the worst case, merge sort performs a number of comparisons equal to or slightly smaller than (n ⌈lg n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[1]
For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + ∑_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.
In the worst case, merge sort does about 39% fewer comparisons than quicksort does in the average case; merge sort always makes fewer comparisons than quicksort, except in extremely rare cases, when they tie, where merge sort's worst case is found simultaneously with quicksort's best case. In terms of moves, merge sort's worst case complexity is O(n log n)—the same complexity as quicksort's best case, and merge sort's best case takes about half as many iterations as the worst case.
Recursive implementations of merge sort make 2n − 1 method calls in the worst case, compared to quicksort's n, thus merge sort has roughly twice as much recursive overhead as quicksort. However, iterative, non-recursive implementations of merge sort, avoiding method call overhead, are not difficult to code. Merge sort's most common implementation does not sort in place; therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for versions that need only n/2 extra spaces).
Merge sort as described here also has an often overlooked, but practically important, best-case property. If the input is already sorted, its complexity falls to O(n). Specifically, n-1 comparisons and zero moves are performed, which is the same as for simply running through the input, checking if it is pre-sorted.
Sorting in place is possible (e.g., using lists rather than arrays) but is very complicated, and offers little performance gain in practice, even if the algorithm runs in O(n log n) time. (Katajainen, Pasanen & Teuhola 1996) In these cases, algorithms like heapsort usually offer comparable speed and are far less complex. Additionally, unlike the standard merge sort, in-place merge sort is not a stable sort. In the case of linked lists the algorithm does not use more space than that already used by the list representation, apart from the O(log k) space used for the recursion stack.
Merge sort is more efficient than quick sort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort as long as the merge operation is implemented properly.
As can be seen from the procedure merge sort, there are some demerits. One complaint we might raise is its use of 2n locations; the additional n locations were needed because one couldn't reasonably merge two sorted sets in place. But despite the use of this space the algorithm must still work hard: The contents of m are first copied into left and right and later into the list result on each invocation of merge_sort (variable names according to the pseudocode above). An alternative to this copying is to associate a new field of information with each key (the elements in m are called keys). This field will be used to link the keys and any associated information together in a sorted list (a key and its related information is called a record). Then the merging of the sorted lists proceeds by changing the link values; no records need to be moved at all. A field which contains only a link will generally be smaller than an entire record so less space will also be used.
Another alternative for reducing the space overhead to n/2 is to maintain left and right as a combined structure, copy only the left part of m into temporary space, and to direct the merge routine to place the merged output into m. With this version it is better to allocate the temporary space outside the merge routine, so that only one allocation is needed. The excessive copying mentioned in the previous paragraph is also mitigated, since the last pair of lines before the return result statement (function merge in the pseudo code above) become superfluous.
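A rough Python sketch of this n/2-space variant is given below. It is an illustrative reading of the paragraph above under my own naming and buffer handling, not the article's reference implementation: only the left half is copied into temporary space, and the merge writes back into the original array.

```python
# In-array merge sort that copies only the LEFT half into temporary space.
def merge_sort_halfspace(a, lo=0, hi=None):
    """Sort a[lo:hi] in place using roughly n/2 extra space."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort_halfspace(a, lo, mid)
    merge_sort_halfspace(a, mid, hi)
    left = a[lo:mid]              # temporary copy of the left half only
    i, j, k = 0, mid, lo
    # Merge the buffered left half with the in-place right half back into a.
    while i < len(left) and j < hi:
        if left[i] <= a[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = a[j]; j += 1
        k += 1
    # Leftover left-half elements still need copying; the right half is already in place.
    while i < len(left):
        a[k] = left[i]; i += 1; k += 1

data = [5, 2, 4, 6, 1, 3]
merge_sort_halfspace(data)
print(data)   # [1, 2, 3, 4, 5, 6]
```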
Merge sorting tape drives
Merge sort is so inherently sequential that it is practical to run it using slow tape drives as input and output devices. It requires very little memory, and the memory required does not depend on the number of data elements.
For the same reason it is also useful for sorting data on disk that is too large to fit entirely into primary memory. On tape drives that can run both backwards and forwards, merge passes can be run in both directions, avoiding rewind time.
If you have four tape drives, it works as follows:
- Divide the data to be sorted in half and put half on each of two tapes
- Merge individual pairs of records from the two tapes; write two-record chunks alternately to each of the two output tapes
- Merge the two-record chunks from the two output tapes into four-record chunks; write these alternately to the original two input tapes
- Merge the four-record chunks into eight-record chunks; write these alternately to the original two output tapes
- Repeat until you have one chunk containing all the data, sorted; that is, for log n passes, where n is the number of records.
For almost-sorted data on tape, a bottom-up "natural merge sort" variant of this algorithm is popular.
The bottom-up "natural merge sort" merges whatever "chunks" of in-order records are already in the data. In the worst case (reversed data), "natural merge sort" performs the same as the above—it merges individual records into 2-record chunks, then 2-record chunks into 4-record chunks, etc. In the best case (already mostly-sorted data), "natural merge sort" merges large already-sorted chunks into even larger chunks, hopefully finishing in fewer than log n passes.
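Below is a small Python sketch of this bottom-up "natural" variant (my own illustrative code, using the standard-library `heapq.merge` for the pairwise merges): it detects the runs already present in the input and then merges adjacent runs until one remains.

```python
import heapq

# Bottom-up "natural" merge sort: merge the runs that already exist in the data.
def natural_merge_sort(items):
    if not items:
        return []
    # 1. Split the input into maximal non-decreasing runs.
    runs, run = [], [items[0]]
    for x in items[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    # 2. Repeatedly merge adjacent runs until a single sorted run remains.
    while len(runs) > 1:
        merged = [list(heapq.merge(runs[i], runs[i + 1]))
                  for i in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:
            merged.append(runs[-1])
        runs = merged
    return runs[0]

print(natural_merge_sort([3, 4, 9, 1, 2, 8, 5]))   # [1, 2, 3, 4, 5, 8, 9]
```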
# Original data is on the input tape; the other tapes are blank
function merge_sort(input_tape, output_tape, scratch_tape_C, scratch_tape_D)
    while any records remain on the input_tape
        while any records remain on the input_tape
            merge(input_tape, output_tape, scratch_tape_D)
            merge(input_tape, output_tape, scratch_tape_C)
        while any records remain on C or D
            merge(scratch_tape_C, scratch_tape_D, input_tape)
            merge(scratch_tape_C, scratch_tape_D, output_tape)
# Take the next sorted chunk from the input tapes, and merge it into the single given output_tape.
# Tapes are scanned linearly.
# tape[next] gives the record currently under the read head of that tape.
# tape[current] gives the record previously under the read head of that tape.
# (Generally both tape[current] and tape[previous] are buffered in RAM ...)
function merge(left[], right[], output_tape[])
    do
        if left[current] ≤ right[current]
            read next record from left tape
            append left[current] to output_tape
        else
            read next record from right tape
            append right[current] to output_tape
    while left[current] < left[next] and right[current] < right[next]
    if left[current] < left[next]
        append current_left_record to output_tape
    if right[current] < right[next]
        append current_right_record to output_tape
    return
Either form of merge sort can be generalized to any number of tapes.
Optimizing merge sort
On modern computers, locality of reference can be of paramount importance in software optimization, because multi-level memory hierarchies are used. Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a single page in memory. Each of these subarrays is sorted with an in-place sorting algorithm, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization. (LaMarca & Ladner 1997)
Kronrod (1969) suggested an alternative version of merge sort that uses constant additional space. This algorithm was refined by Katajainen, Pasanen & Teuhola (1996).
Comparison with other sort algorithms
Although heapsort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n), and is often faster in practical implementations. On typical modern architectures, efficient quicksort implementations generally outperform mergesort for sorting RAM-based arrays. On the other hand, merge sort is a stable sort, parallelizes better, and is more efficient at handling slow-to-access sequential media. Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible.
As of Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl). In Java, the Arrays.sort() methods use merge sort or a tuned quicksort depending on the datatypes and for implementation efficiency switch to insertion sort when fewer than seven array elements are being sorted.[2] Python uses timsort, another tuned hybrid of merge sort and insertion sort, which will also become the standard sort algorithm for Java SE 7.
Utility in online sorting
Merge sort's merge operation is useful in online sorting, where the list to be sorted is received a piece at a time, instead of all at the beginning. In this application, we sort each new piece that is received using any sorting algorithm, and then merge it into our sorted list so far using the merge operation. However, this approach can be expensive in time and space if the received pieces are small compared to the sorted list — a better approach in this case is to store the list in a self-balancing binary search tree and add elements to it as they are received.
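A tiny Python illustration of this idea (my own example, not from the article): each incoming batch is sorted on its own and then merged into the running sorted list with the standard-library `heapq.merge`.

```python
import heapq

# Online sorting via repeated merging: keep a sorted list and fold in each new batch.
sorted_so_far = []
for batch in ([7, 3, 9], [1, 8], [5, 2, 6]):      # pieces arriving over time
    batch.sort()                                   # sort the new piece by itself
    sorted_so_far = list(heapq.merge(sorted_so_far, batch))
print(sorted_so_far)   # [1, 2, 3, 5, 6, 7, 8, 9]
```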