Sampling
The process of capturing the sound at small, regular time increments.
These increments later form a series of discrete values coded in binary format.
Sampling Rate: how often the sound is captured. The higher the sampling rate, the more samples are taken, the better the sound quality and the bigger the file size.
(Unit: kHz)
The sampling rates most often used are 11.025 kHz, 22.05 kHz (most common) and 44.1 kHz (standard for audio CD).
Quantization
The process of rounding off each sampled value (the vertical axis of the waveform) so that it can be represented by a fixed number of binary bits.
Unit: bits (sound can be quantized at 8-bit, 16-bit and so on)
If you choose an 8-bit quantization level for the analog sound, its amplitude range is divided into 2^8 = 256 levels.
Similarly, sound quantized at 16-bit divides the amplitude range into 65,536 levels, just as 8-bit graphics can convey 256 colors and 16-bit graphics can display 65,536 colors.
The more bits used, the better the sound and the larger the file size (a short worked example follows below).
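As a rough illustration, here is a minimal C sketch of these two ideas (the one-minute stereo recording is a hypothetical example, not a figure from the text): the number of quantization levels is 2 raised to the bit depth, and the uncompressed size is sampling rate × bytes per sample × channels × duration.

/* build: gcc size.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    double rate_hz  = 44100.0;   /* 44.1 kHz, the audio-CD sampling rate         */
    int    bits     = 16;        /* quantization depth                           */
    int    channels = 2;         /* stereo                                       */
    double seconds  = 60.0;      /* one minute of sound (hypothetical duration)  */

    double levels = pow(2.0, bits);                              /* 2^16 = 65,536 levels */
    double bytes  = rate_hz * (bits / 8.0) * channels * seconds;

    printf("Quantization levels: %.0f\n", levels);
    printf("Uncompressed size: %.1f MB\n", bytes / (1024.0 * 1024.0));  /* about 10.1 MB */
    return 0;
}

Doubling either the sampling rate or the bit depth doubles the file size, which is the quality/size trade-off described above.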
INFORMATION ABOUT COMPUTER SCIENCE (SOFTWARE ENGINEERING), TECHNOLOGY & MEDICAL SCIENCE
Monday, January 18, 2010
Digitization
1. Our world and our bodies are analog, functioning in a smooth and continuous flow (analog signal),
e.g. water currents, wind, the flow of blood, etc.
2. The digital world is made of little chunks (digital signal).
Digital Signal
• The representation of information as a series of numbers
• A sequence of discrete values coded in binary format
• Humans deal with analog information
• Humans can only perceive digital information once it has been transformed back into the analog domain
• Computers can only generate and accept information in digital form
• Therefore, we need “Digitization”
Digitization (Devices)
Two devices allow humans and computers to interact: the ADC and the DAC
Analog-to-Digital Converter (ADC):
Performs the conversion from analog sound to digital sound
Digital-to-Analog Converter (DAC):
Performs the re-conversion from digital sound back to analog sound
Found in multimedia hardware: sound cards, audio recorders, graphics cards, video recorders, CD-audio players, printers, monitors, network cards, etc.
Digitization (Process)
Digitization (the transformation of analog signals into digital signals) requires two successive steps, illustrated in the sketch below:
1. Sampling
2. Quantization (resolution)
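A minimal C sketch of these two steps (the 1 kHz tone, the 8 kHz sampling rate and the 8-bit depth are made-up values chosen only for illustration; a real ADC does this in hardware):

/* build: gcc adc.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    const double f  = 1000.0;    /* analog tone frequency (Hz)      */
    const double fs = 8000.0;    /* sampling rate (Hz)              */
    const int levels = 256;      /* 8-bit quantization: 2^8 levels  */

    for (int n = 0; n < 8; n++) {                 /* first 8 samples                    */
        double t = n / fs;                        /* sampling: pick discrete instants   */
        double x = sin(2.0 * PI * f * t);         /* "analog" value in [-1, 1]          */
        int q = (int)((x + 1.0) / 2.0 * (levels - 1) + 0.5);   /* quantization: 0..255  */
        printf("n=%d  t=%.6f s  analog=%+.4f  8-bit code=%d\n", n, t, x, q);
    }
    return 0;
}

The DAC performs the reverse: it maps each code back to a voltage, reconstructing an approximation of the original waveform.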
Types of sound
Sound effects
Message reinforcing: e.g. when discussing topics on nature, sounds of birds, waves etc. can enhance the message
Music
Narration: a voice describes facts pertaining to the topic
Voice-overs: Not to be confused with narration, this type of content sound is used in instances where short instructions may be necessary for the user to navigate the multimedia application.
Speech
Singing: Combines characteristics of speech and music
Characteristics of Sound (Frequency & Amplitude)
Two important sound characteristics:
frequency and amplitude (a short worked example follows below)
Frequency:
The number of cycles a sound wave completes in one second (perceived as pitch)
A cycle is measured from one wave peak to another
Unit: Hertz (Hz) or cycles per second (cps)
Amplitude:
The volume or loudness of a particular sound
The louder the sound, the higher the amplitude will be
Unit: decibel (dB)
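A short worked example (the 2 ms period and the amplitude ratio of 10 are invented figures): frequency is the reciprocal of the period of one cycle, and the loudness difference between two amplitudes can be expressed in decibels.

/* build: gcc sound.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    double period_s = 0.002;               /* one cycle every 2 ms (hypothetical)   */
    double freq_hz  = 1.0 / period_s;      /* frequency = 1 / period = 500 Hz       */

    double a_loud = 10.0, a_quiet = 1.0;   /* two amplitudes, arbitrary units       */
    double diff_db = 20.0 * log10(a_loud / a_quiet);   /* amplitude ratio in dB     */

    printf("Frequency: %.0f Hz\n", freq_hz);           /* 500 Hz                    */
    printf("Level difference: %.0f dB\n", diff_db);    /* 20 dB                     */
    return 0;
}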
INTRO TO Sound:
Sound:
• Fluctuations in air pressure (sound is vibration in the air) that can be perceived by our ears with some qualitative attributes
• Produced by a source that creates vibration in the air
• The pattern of oscillation is called a waveform
Software Interrupts
Software interrupts are used by programs to request system services.
These interrupts are treated in the same way as interrupts from hardware devices.
In assembly we use the INT instruction to perform such interrupts.
Syntax: name INT interrupt-number ;comments
Syntax (simplified): INT interrupt-number
Example: INT 21h
These interrupts are used together with their functions.
Explanation
To perform I/O operations we use interrupt 21h,
but to perform a specific task, such as printing a string to the standard output device, we have to use its function 09h.
Example code
MOV AH, 09h    ; select DOS function 09h (display string)
LEA DX, string ; DS:DX points to the '$'-terminated string to print
INT 21h        ; call DOS
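For comparison, here is a minimal, hypothetical sketch of the same idea on a different platform: on 32-bit x86 Linux a program can request a kernel service with the software interrupt INT 0x80 (the register convention differs from DOS INT 21h; this is not DOS code). Build with gcc -m32.

#include <string.h>

int main(void) {
    const char msg[] = "hello via a software interrupt\n";
    long ret;

    /* eax = 4 (sys_write), ebx = 1 (stdout), ecx = buffer, edx = length */
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4L), "b"(1L), "c"(msg), "d"((long)strlen(msg))
                      : "memory");
    return 0;
}

In both cases the software interrupt is simply a controlled entry point into the operating system, selected by an interrupt number and a function code placed in registers.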
Hardware Interrupt
• When an interrupt is generated by hardware:
• It sends a signal request to the processor
• The processor suspends the current task it is executing
• Control is then transferred to the interrupt routine
• The interrupt routine then performs some I/O operations, depending on which interrupt was generated by the hardware
• Finally, control is transferred back to the previously executing task at the point where it was suspended.
How do interrupts work?
• When the hardware needs service, it will request an interrupt.
• A thread is defined as the path of action of software as it executes.
• The execution of the interrupt service routine is called a background thread.
• This thread is created by the hardware interrupt request.
• The thread is killed when the interrupt service routine executes its return-from-interrupt instruction.
• A new thread is created for each interrupt request. It is important to consider each individual request as a separate thread because local variables and registers used in the interrupt service routine are unique and separate from one interrupt event to the next.
• In a multithreaded system we consider the threads as cooperating to perform an overall task.
• When an interrupt is generated:
• It sends a signal request to the processor
• The processor suspends the current task it is executing
• Control is then transferred to the interrupt routine
• The interrupt routine then performs some I/O operations, depending on which interrupt function is called or generated by the hardware
• Finally, control is transferred back to the previously executing task at the point where it was suspended. A minimal sketch of this foreground/background structure is shown below.
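A minimal sketch of the foreground/background structure in C, for a hypothetical microcontroller (the register name UART_DATA, its address, and the way the ISR gets installed in the vector table are assumptions for illustration only):

#include <stdint.h>

#define UART_DATA (*(volatile uint8_t *)0x4000)   /* hypothetical receive-data register */

volatile uint8_t data_ready = 0;    /* flag shared between the ISR and the main loop */
volatile uint8_t latest_byte = 0;

/* Background thread: runs only when the hardware requests an interrupt.
   On a real part, startup code would place its address in the vector table. */
void uart_rx_isr(void) {
    latest_byte = UART_DATA;        /* reading the register acknowledges the request */
    data_ready  = 1;                /* signal the foreground thread                  */
}                                   /* the compiler emits the return-from-interrupt  */

int main(void) {
    for (;;) {                      /* foreground thread (the interrupted program)   */
        if (data_ready) {
            data_ready = 0;
            /* ... process latest_byte ... */
        }
    }
}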
Introduction to Interrupts
An interrupt is the automatic transfer of software execution in response to a hardware event that is asynchronous with the current software execution.
There are 3 main categories of interrupts: Hardware Interrupt, Software Interrupt, and Processor Exception.
Interrupts were originally created to allow hardware devices to interrupt the operation of the CPU.
Thursday, January 14, 2010
Advantages & Disadvantages of DBMSs
Advantages of DBMSs
• Control of data redundancy
• Data consistency
• More information from the same amount of data
• Sharing of data
• Improved data integrity
• Improved security
• Enforcement of standards
• Economy of scale
• Balanced conflicting requirements
• Improved data accessibility and responsiveness
• Increased productivity
• Improved maintenance through data independence
• Increased concurrency
• Improved backup and recovery services
Disadvantages of DBMSs
• Complexity
• Size
• Cost of DBMS
• Additional hardware costs
• Cost of conversion
• Performance
• Higher impact of a failure
Components of the Database Environment
• Hardware
– Can range from a PC to a network of computers, containing secondary storage volumes and hardware processor(s) with associated main memory, used to support execution of the database management system
• Software
– DBMS, operating system, network software (if necessary) and also the application programs.
• Data
– The data used by the organization and a description of this data, called the schema. The data, as discussed above, is integrated and shared.
– By integrated it is meant that the data is actually a unification of several files, with redundancy among files partially eliminated.
– By shared it is meant that individual pieces of data in the database can be shared among different users.
• Procedures
– Instructions and rules that should be applied to the design and use of the database and DBMS.
• People
– The people that participate in the database environment.
– Including
• Application Programmers who are responsible for writing database applications
• End Users are people who interact with the database system from workstations and terminals in order to view and use data to complete their routine tasks.
• Data Administrator is the person responsible for deciding what data is important and should be recorded. This person belongs to the senior management level (normally not a technician) and understands what is important for the enterprise. He or she is also responsible for defining various policies related to data, including the security policy.
• Database Administrator: This is a technical person responsible for implementing the policies defined by the data administrator. The DBA is also responsible for ensuring that the system operates with adequate performance and for providing a variety of technical services.
INTRO OF Database Management System & VIEWS
Database Management System (DBMS)
• A software system that enables users to define, create, and maintain the database and that provides controlled access to this database.
Views
• Allows each user to have his or her own view of the database.
• A view is essentially some subset of the database (see the sketch below).
• Benefits include:
• Reduce complexity;
• Provide a level of security;
• Provide a mechanism to customize the appearance of the database;
• Present a consistent, unchanging picture of the structure of the database, even if the underlying database is changed.
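A minimal sketch of a view in practice, using SQLite's C API purely for illustration (it assumes libsqlite3 is installed; the staff table and its columns are made up). The view exposes only a subset of the base table, which is the complexity-reduction and security benefit listed above. Build with gcc view.c -lsqlite3.

#include <stdio.h>
#include <sqlite3.h>

/* Print each row returned by sqlite3_exec */
static int print_row(void *unused, int ncols, char **vals, char **names) {
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s  ", names[i], vals[i] ? vals[i] : "NULL");
    printf("\n");
    return 0;
}

int main(void) {
    sqlite3 *db;
    char *err = NULL;
    sqlite3_open(":memory:", &db);                 /* throwaway in-memory database */

    const char *sql =
        "CREATE TABLE staff(id INTEGER PRIMARY KEY, name TEXT, salary REAL);"
        "INSERT INTO staff VALUES (1,'Ann',52000),(2,'Bob',48000);"
        "CREATE VIEW staff_public AS SELECT id, name FROM staff;"   /* hides salary */
        "SELECT * FROM staff_public;";

    if (sqlite3_exec(db, sql, print_row, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}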
Database Approach
• Arose because:
– Definition of data was embedded in application programs, rather than being stored separately and independently.
– No control over access and manipulation of data beyond that imposed by application programs.
• Result:
– the database and Database Management System (DBMS).
Database Approach
• Data definition language (DDL).
– Permits specification of data types, structures and any data constraints.
– All specifications are stored in the database.
• Data manipulation language (DML).
– General enquiry facility (query language) for the data (see the sketch after this list).
• Controlled access to the database may include
– A security system.
– An integrity system.
– A concurrency control system.
– A recovery control system.
– A user-accessible catalog.
• A view mechanism.
– Provides users with only the data they want or need to use.
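A minimal sketch showing DDL and DML side by side, again using SQLite's C API only for illustration (the dept/emp tables are hypothetical; a row callback, as in the previous sketch, would receive the query results):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = NULL;
    sqlite3_open(":memory:", &db);

    const char *ddl =                       /* DDL: data types, structure, constraints */
        "CREATE TABLE dept(dept_no INTEGER PRIMARY KEY, dname TEXT NOT NULL);"
        "CREATE TABLE emp(emp_no INTEGER PRIMARY KEY,"
        "                 ename  TEXT NOT NULL,"
        "                 dept_no INTEGER REFERENCES dept(dept_no));";

    const char *dml =                       /* DML: insert data and query it */
        "INSERT INTO dept VALUES (10,'Sales');"
        "INSERT INTO emp  VALUES (1,'Ann',10);"
        "SELECT ename, dname FROM emp JOIN dept USING (dept_no);";

    if (sqlite3_exec(db, ddl, NULL, NULL, &err) != SQLITE_OK ||
        sqlite3_exec(db, dml, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}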
File-based Systems & Limitations of File-Based Approach
File-based Systems
• Collection of application programs that perform services for the end users (e.g. reports).
• Each program defines and manages its own data.
• E.g. a C++ program that accepts and stores data; in such a case the sequence in which the fields are recorded is coded in the program, not in the file.
Limitations of File-Based Approach
• Separation and isolation of data
– Each program maintains its own set of data.
– Users of one program may be unaware of potentially useful data held by other programs.
• Duplication of data
– The same data is held by different programs.
– Wasted space and potentially different values and/or different formats for the same item.
Introducing Database
Introducing Database:
A collection of computerized data files. In simple words, it is computerized record-keeping.
Examples of Database Applications
• Purchases from the supermarket
• Purchases using your credit card
• Booking a holiday at the travel agents
• Using the local library
• Taking out insurance
• Using the Internet
• Studying at university
Formal definition of Database
• Shared collection of logically related data (and a description of this data), designed to meet the information needs of an organization.
• System catalog (metadata) provides description of data to enable program–data independence.
• Logically related data comprises entities, attributes, and relationships of an organization’s information.
The Relational Data Model (database)
The Relational Data Model has the relation at its heart, but then a whole series of rules governing keys, relationships, joins, functional dependencies, transitive dependencies, multi-valued dependencies, and modification anomalies.
The Relation
The Relation is the basic element in a relational data model.
A relation is subject to the following rules:
1. Relation (file, table) is a two-dimensional table.
2. Attribute (i.e. field or data item) is a column in the table.
3. Each column in the table has a unique name within that table.
4. Each column is homogeneous. Thus the entries in any column are all of the same type (e.g. age, name, employee-number, etc).
5. Each column has a domain, the set of possible values that can appear in that column.
6. A Tuple (i.e. record) is a row in the table.
7. The order of the rows and columns is not important.
8. Values of a row all relate to some thing or portion of a thing.
9. Repeating groups (collections of logically related attributes that occur multiple times within one record occurrence) are not allowed.
10. Duplicate rows are not allowed (candidate keys are designed to prevent this).
11. Cells must be single-valued (but can be variable length). Single valued means the following:
o Cannot contain multiple values such as 'A1,B2,C3'.
o Cannot contain combined values such as 'ABC-XYZ' where 'ABC' means one thing and 'XYZ' another.
A relation may be expressed using the notation R(A,B,C, ...) where:
• R = the name of the relation.
• (A,B,C, ...) = the attributes within the relation.
• A = the attribute(s) which form the primary key.
Keys
1. A simple key contains a single attribute.
2. A composite key is a key that contains more than one attribute.
3. A candidate key is an attribute (or set of attributes) that uniquely identifies a row. A candidate key must possess the following properties:
o Unique identification - For every row the value of the key must uniquely identify that row.
o Non redundancy - No attribute in the key can be discarded without destroying the property of unique identification.
4. A primary key is the candidate key which is selected as the principal unique identifier. Every relation must contain a primary key. The primary key is usually the key selected to identify a row when the database is physically implemented. For example, a part number is selected instead of a part description.
5. A superkey is any set of attributes that uniquely identifies a row. A superkey differs from a candidate key in that it does not require the non redundancy property.
6. A foreign key is an attribute (or set of attributes) that appears (usually) as a non key attribute in one relation and as a primary key attribute in another relation. I say usually because it is possible for a foreign key to also be the whole or part of a primary key:
o A many-to-many relationship can only be implemented by introducing an intersection or link table which then becomes the child in two one-to-many relationships. The intersection table therefore has a foreign key for each of its parents, and its primary key is a composite of both foreign keys.
o A one-to-one relationship requires that the child table has no more than one occurrence for each parent, which can only be enforced by letting the foreign key also serve as the primary key.
7. A semantic or natural key is a key for which the possible values have an obvious meaning to the user or the data. For example, a semantic primary key for a COUNTRY entity might contain the value 'USA' for the occurrence describing the United States of America. The value 'USA' has meaning to the user.
8. A technical or surrogate or artificial key is a key for which the possible values have no obvious meaning to the user or the data. These are used instead of semantic keys for any of the following reasons:
o When the value in a semantic key is likely to be changed by the user, or can have duplicates. For example, on a PERSON table it is unwise to use PERSON_NAME as the key as it is possible to have more than one person with the same name, or the name may change such as through marriage.
o When none of the existing attributes can be used to guarantee uniqueness. In this case adding an attribute whose value is generated by the system, e.g from a sequence of numbers, is the only way to provide a unique value. Typical examples would be ORDER_ID and INVOICE_ID. The value '12345' has no meaning to the user as it conveys nothing about the entity to which it relates.
9. A key functionally determines the other attributes in the row, thus it is always a determinant.
10. Note that the term 'key' in most DBMS engines is implemented as an index which does not allow duplicate entries.
Relationships
One table (relation) may be linked with another in what is known as a relationship. Relationships may be built into the database structure to facilitate the operation of relational joins at runtime.
1. A relationship is between two tables in what is known as a one-to-many or parent-child or master-detail relationship, where an occurrence on the 'one' or 'parent' or 'master' table may have any number of associated occurrences on the 'many' or 'child' or 'detail' table. To achieve this the child table must contain fields which link back to the primary key on the parent table. These fields on the child table are known as a foreign key, and the parent table is referred to as the foreign table (from the viewpoint of the child).
2. It is possible for a record on the parent table to exist without corresponding records on the child table, but it should not be possible for an entry on the child table to exist without a corresponding entry on the parent table.
3. A child record without a corresponding parent record is known as an orphan.
4. It is possible for a table to be related to itself. For this to be possible it needs a foreign key which points back to the primary key. Note that these two keys cannot be comprised of exactly the same fields otherwise the record could only ever point to itself.
5. A table may be the subject of any number of relationships, and it may be the parent in some and the child in others.
6. Some database engines allow a parent table to be linked via a candidate key, but if this were changed it could result in the link to the child table being broken.
7. Some database engines allow relationships to be managed by rules known as referential integrity or foreign key constraints. These will prevent entries on child tables from being created if the foreign key does not exist on the parent table, or will deal with entries on child tables when the entry on the parent table is updated or deleted (a short sketch follows this list).
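A minimal sketch of referential integrity in practice, using SQLite's C API only for illustration (SQLite calls these foreign key constraints and requires them to be switched on; the parent/child tables are made up):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = NULL;
    sqlite3_open(":memory:", &db);

    sqlite3_exec(db,
        "PRAGMA foreign_keys = ON;"                        /* enable enforcement in SQLite */
        "CREATE TABLE parent(id INTEGER PRIMARY KEY);"
        "CREATE TABLE child(id INTEGER PRIMARY KEY,"
        "  parent_id INTEGER NOT NULL REFERENCES parent(id));"
        "INSERT INTO parent VALUES (1);",
        NULL, NULL, &err);

    /* This row would be an orphan (parent 99 does not exist), so it is rejected. */
    if (sqlite3_exec(db, "INSERT INTO child VALUES (1, 99);", NULL, NULL, &err) != SQLITE_OK) {
        printf("rejected: %s\n", err);                     /* FOREIGN KEY constraint failed */
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}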
Determinant and Dependent
The terms determinant and dependent can be described as follows:
1. The expression X → Y means 'if I know the value of X, then I can obtain the value of Y' (in a table or somewhere).
2. In the expression X → Y, X is the determinant and Y is the dependent attribute.
3. The value X determines the value of Y.
4. The value Y depends on the value of X.
Functional Dependencies (FD)
A functional dependency can be described as follows:
1. An attribute is functionally dependent if its value is determined by another attribute which is a key.
2. That is, if we know the value of one (or several) data items, then we can find the value of another (or several).
3. Functional dependencies are expressed as X → Y, where X is the determinant and Y is the functionally dependent attribute.
4. If A → (B,C) then A → B and A → C.
5. If (A,B) → C, then it is not necessarily true that A → C and B → C.
6. If A → B and B → A, then A and B are in a 1-1 relationship.
7. If A → B then for A there can only ever be one value for B.
Transitive Dependencies (TD)
A transitive dependency can be described as follows:
1. An attribute is transitively dependent if its value is determined by another attribute which is not a key.
2. If X → Y and X is not a key then this is a transitive dependency.
3. A transitive dependency exists when A → B → C but NOT A → C.
Multi-Valued Dependencies (MVD)
A multi-valued dependency can be described as follows:
1. A table involves a multi-valued dependency if it may contain multiple values for an entity.
2. A multi-valued dependency may arise as a result of enforcing 1st normal form.
3. X →→ Y, i.e. X multi-determines Y, when for each value of X we can have more than one value of Y.
4. If A →→ B and A →→ C then we have a single attribute A which multi-determines two other independent attributes, B and C.
5. If A →→ (B,C) then we have an attribute A which multi-determines a set of associated attributes, B and C.
Types of Relational Join
A JOIN is a method of creating a result set that combines rows from two or more tables (relations). When comparing the contents of two tables the following conditions may occur:
• Every row in one relation has a match in the other relation.
• Relation R1 contains rows that have no match in relation R2.
• Relation R2 contains rows that have no match in relation R1.
INNER joins contain only matches. OUTER joins may contain mismatches as well.
Inner Join
This is sometimes known as a simple join. It returns all rows from both tables where there is a match. If there are rows in R1 which do not have matches in R2, those rows will not be listed. There are two possible ways of specifying this type of join:
SELECT * FROM R1, R2 WHERE R1.r1_field = R2.r2_field;
SELECT * FROM R1 INNER JOIN R2 ON R1.r1_field = R2.r2_field
If the fields to be matched have the same names in both tables then the ON condition, as in:
ON R1.fieldname = R2.fieldname
ON (R1.field1 = R2.field1 AND R1.field2 = R2.field2)
can be replaced by the shorter USING condition, as in:
USING fieldname
USING (field1, field2)
Natural Join
A natural join is based on all columns in the two tables that have the same name. It is semantically equivalent to an INNER JOIN or a LEFT JOIN with a USING clause that names all columns that exist in both tables.
SELECT * FROM R1 NATURAL JOIN R2
The alternative is a keyed join which includes an ON or USING condition.
Left [Outer] Join
Returns all the rows from R1 even if there are no matches in R2. If there are no matches in R2 then the R2 values will be shown as null.
SELECT * FROM R1 LEFT [OUTER] JOIN R2 ON R1.field = R2.field
Right [Outer] Join
Returns all the rows from R2 even if there are no matches in R1. If there are no matches in R1 then the R1 values will be shown as null.
SELECT * FROM R1 RIGHT [OUTER] JOIN R2 ON R1.field = R2.field
Full [Outer] Join
Returns all the rows from both tables even if there are no matches in one of the tables. If there are no matches in one of the tables then its values will be shown as null.
SELECT * FROM R1 FULL [OUTER] JOIN R2 ON R1.field = R2.field
Self Join
This joins a table to itself. This table appears twice in the FROM clause and is followed by table aliases that qualify column names in the join condition.
SELECT a.field1, b.field2 FROM R1 a, R1 b WHERE a.field = b.field
Cross Join
This type of join is rarely used as it does not have a join condition, so every row of R1 is joined to every row of R2. For example, if both tables contain 100 rows the result will be 10,000 rows. This is sometimes known as a cartesian product and can be specified in either one of the following ways:
SELECT * FROM R1 CROSS JOIN R2
SELECT * FROM R1, R2
Wednesday, January 13, 2010
FIFO Dynamics
As you recall, the FIFO passes the data from the producer to the consumer. In general, the rates at which data are produced and consumed can vary dynamically. Humans do not enter data into a keyboard at a constant rate.
Even printers require more time to print color graphics versus black and white text. Let tp be the time (in sec) between calls to PutFifo, and rp be the arrival rate (producer rate in bytes/sec) into the system. Similarly, let tg be the time (in sec) between calls to GetFifo, and rg be the service rate (consumer rate in bytes/sec) out of the system.
rg=1/tg
rp=1/tp
If the minimum time between calls to PutFifo is greater than the maximum time between calls to GetFifo,
min tp > max tg
then the FIFO will stay essentially empty, because data is removed at least as quickly as it arrives. On the other hand, if the time between calls to PutFifo becomes less than the time between calls to GetFifo because either
• the arrival rate temporarily increases
• the service rate temporarily decreases
then information will be collected in the FIFO. For example, a person might type very fast for a while followed by a long pause. The FIFO could be used to capture, without loss, all the data as it comes in very fast. Clearly, on average, the system must be able to process the data (the consumer thread) at least as fast as the average rate at which the data arrives (producer thread). If the average producer rate is larger than the average consumer rate
rp > rg
then the FIFO will eventually overflow no matter how large the FIFO. If the producer rate is temporarily high, and that causes the FIFO to become full, then this problem can be solved by increasing the FIFO size.
There is a fundamental difference between an empty error and a full error. Consider the application of using a FIFO between your computer and its printer. This is a good idea because the computer can temporarily generate data to be printed at a very high rate followed by long pauses. The printer is like a turtle: it can print at a slow but steady rate (e.g., 10 characters/sec). The computer will put a byte into the FIFO that it wants printed. The printer will get a byte out of the FIFO when it is ready to print another character. A full error occurs when the computer calls PutFifo at too fast a rate. A full error is serious, because if it is ignored, data will be lost. On the other hand, an empty error occurs when the printer is ready to print but the computer has nothing left to print. An empty error is not serious, because in this case the printer just sits there doing nothing.
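A small numeric sketch of these rate conditions in C (the 1 ms and 4 ms timing figures are invented for illustration):

#include <stdio.h>

int main(void) {
    double tp = 0.001;      /* 1 ms between calls to PutFifo (hypothetical)  */
    double tg = 0.004;      /* 4 ms between calls to GetFifo (hypothetical)  */

    double rp = 1.0 / tp;   /* producer rate = 1000 bytes/sec                */
    double rg = 1.0 / tg;   /* consumer rate =  250 bytes/sec                */

    if (rp > rg)
        printf("rp (%.0f) > rg (%.0f): the FIFO will eventually overflow,\n"
               "no matter how large it is, unless the burst is only temporary.\n", rp, rg);
    else
        printf("rp (%.0f) <= rg (%.0f): on average the consumer keeps up.\n", rp, rg);
    return 0;
}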
Two pointer/counter FIFO implementation
The other method to determine if a FIFO is empty or full is to implement a counter. In the following code, Size contains the number of bytes currently stored in the FIFO.
The advantage of implementing the counter is that FIFO 1/4 full and 3/4 full conditions are easier to implement. If you were studying the behavior of a system it might be informative to measure the current Size as a function of time.
/* Pointer,counter implementation of the FIFO */
#define FifoSize 10 /* Number of 8 bit data in the Fifo */
char *PutPt; /* Pointer of where to put next */
char *GetPt; /* Pointer of where to get next */
unsigned char Size; /* Number of elements currently in the FIFO */
/* FIFO is empty if Size=0 */
/* FIFO is full if Size=FifoSize */
char Fifo[FifoSize]; /* The statically allocated fifo data */
void InitFifo(void) { char SaveSP;
  asm(" tpa\n staa %SaveSP\n sei");   /* make atomic, start critical section */
  PutPt = GetPt = &Fifo[0];           /* empty when Size==0 */
  Size = 0;
  asm(" ldaa %SaveSP\n tap");         /* end critical section */
}

int PutFifo(char data) { char SaveSP;
  if (Size == FifoSize) {
    return (0);                       /* failed, FIFO was full */
  } else {
    asm(" tpa\n staa %SaveSP\n sei"); /* make atomic, start critical section */
    Size++;
    *(PutPt++) = data;                /* put data into the FIFO */
    if (PutPt == &Fifo[FifoSize]) PutPt = &Fifo[0];   /* wrap */
    asm(" ldaa %SaveSP\n tap");       /* end critical section */
    return (-1);                      /* successful */
  }
}

int GetFifo(char *datapt) { char SaveSP;
  if (Size == 0) {
    return (0);                       /* empty if Size==0 */
  } else {
    asm(" tpa\n staa %SaveSP\n sei"); /* make atomic, start critical section */
    *datapt = *(GetPt++);
    Size--;
    if (GetPt == &Fifo[FifoSize]) GetPt = &Fifo[0];   /* wrap */
    asm(" ldaa %SaveSP\n tap");       /* end critical section */
    return (-1);                      /* successful */
  }
}
Program 5.18. C language routines to implement a two pointer with counter FIFO.
To check for FIFO full, the above PutFifo routine simply compares Size to the maximum allowed value. If the FIFO is already full then the routine exits without saving the data. With this implementation a FIFO with 10 allocated bytes can actually hold 10 data points.
To check for FIFO empty, the GetFifo routine above simply checks whether Size equals 0. If Size is zero at the start of the routine, then GetFifo returns with the "empty" condition signified.
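A short, hypothetical usage sketch of these routines (it assumes it is appended to the file containing the code above; the keyboard and printer calls are placeholders, not part of Program 5.18):

/* Producer side, e.g. called from a keyboard interrupt service routine */
void KeyPressed(char key) {
    if (PutFifo(key) == 0) {
        /* full error: serious, because the keystroke would be lost */
    }
}

/* Consumer side, e.g. called when the printer is ready for another character */
void PrinterReady(void) {
    char c;
    if (GetFifo(&c)) {
        /* send c to the printer port here */
    } else {
        /* empty error: not serious, there is simply nothing to print yet */
    }
}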
First In First Out Queue
Introduction to FIFOs
As we saw earlier, the first in first out circular queue (FIFO) is quite useful for implementing a buffered I/O interface. It can be used for both buffered input and buffered output. This order-preserving data structure temporarily saves data created by the source (producer) before it is processed by the sink (consumer). The class of FIFOs studied in this section will be statically allocated global structures.
Because they are global variables, they exist permanently and can be carefully shared by more than one program. The advantage of using a FIFO structure for a data flow problem is that we can decouple the producer and consumer threads. Without the FIFO we would have to produce one piece of data, then process it, produce another piece of data, then process it. With the FIFO, the producer thread can continue to produce data without having to wait for the consumer to finish processing the previous data. This decoupling can significantly improve system performance.
You have probably already experienced the convenience of FIFOs. For example, you can continue to type more commands into the DOS command interpreter while it is still processing a previous command. The ASCII codes are put into a FIFO (via PutFifo) whenever you hit a key. When the DOS command interpreter is free, it calls GetFifo for more keyboard data to process. A FIFO is also used when you ask the computer to print a file. Rather than waiting for the actual printing to occur character by character, the print command will PUT the data in a FIFO. Whenever the printer is free, it will GET data from the FIFO. The advantage of the FIFO is that it allows you to continue to use your computer while the printing occurs in the background. To implement this magic of background printing we will need interrupts.
There are many producer/consumer applications: producers are processes that create or input data, while consumers are processes that process or output data.
When to use interrupts
The following factors should be considered when deciding the most appropriate mechanism to synchronize hardware and software. One should not always use gadfly (busy-wait) simply because one is too lazy to implement the complexities of interrupts. On the other hand, one should not always use interrupts because they are fun and exciting.
Interrupt Service Routines
The interrupt service routine (ISR) is the software module that is executed when the hardware requests an interrupt. From the last section, we see that there may be one large ISR that handles all requests (polled interrupts), or many small ISRs, each specific to a potential source of interrupt (vectored interrupts). The design of the interrupt service routine requires careful consideration of many factors that will be discussed in this chapter. When an interrupt is requested (and the device is armed and the I bit is one), the microcomputer will service the interrupt as follows:
1) the execution of the main program is suspended (the current instruction is finished),
2) the interrupt service routine, or background thread is executed,
3) the main program is resumed when the interrupt service routine executes iret.
When the microcomputer accepts an interrupt request, it will automatically save the execution state of the main thread by pushing all its registers on the stack. After the ISR provides the necessary service, it will execute an iret instruction. This instruction pulls the registers from the stack, which returns control to the main program. Execution of the main program will then continue with the exact stack and register values that existed before the interrupt. Although interrupt handlers can allocate, access, and then deallocate local variables, parameter passing between threads must be implemented using global memory variables. Global variables are also required if an interrupt thread wishes to pass information to itself, e.g., from one interrupt instance to another. The execution of the main program is called the foreground thread, and the executions of interrupt service routines are called background threads.
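Real interrupt service routines are written in assembly or C on the microcomputer itself, so the following is only a rough Java analogy of the point about global variables: a background thread leaves its result in a shared (global) variable, and the foreground thread reads it there. The class and variable names are made up for illustration.

// Rough analogy only: a background thread passes data to the foreground thread
// through a shared "global" variable, as an ISR would through global memory.
public class IsrAnalogy {
    private static volatile int sharedCount = 0;  // shared between the two threads

    public static void main(String[] args) throws InterruptedException {
        // Background thread: stands in for a periodic interrupt handler.
        Thread background = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                sharedCount++;  // the "ISR" leaves its result in global memory
                try {
                    Thread.sleep(100);  // pretend the hardware interrupts every 100 ms
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        background.start();

        background.join();  // the foreground (main) thread waits, then reads the shared variable
        System.out.println("count passed from background thread = " + sharedCount);
    }
}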
Interrupt definition
An interrupt is the automatic transfer of software execution in response to a hardware event that is asynchronous with the current software execution. The hardware can either be an external I/O device (like a keyboard or printer) or an internal event (like an op-code fault or a periodic timer). When the hardware needs service (a busy-to-done state transition) it will request an interrupt. A thread is defined as the path of action of software as it executes. The execution of the interrupt service routine is called a background thread. This thread is created by the hardware interrupt request and is killed when the interrupt service routine executes the iret instruction. A new thread is created for each interrupt request. It is important to consider each individual request as a separate thread because local variables and registers used in the interrupt service routine are unique and separate from one interrupt event to the next. In a multithreaded system we consider the threads as cooperating to perform an overall task, so we will develop ways for the threads to communicate and synchronize with each other. Most embedded systems have a single common overall goal; general-purpose computers, on the other hand, can have multiple unrelated functions to perform. A process is also defined as the action of software as it executes; the difference is that processes do not necessarily cooperate towards a common shared goal.
The software has dynamic control over aspects of the interrupt request sequence. First, each potential interrupt source has a separate arm bit that the software can activate or deactivate. The software will set the arm bits for those devices from which it wishes to accept interrupts, and will deactivate the arm bits within those devices from which interrupts are not to be allowed. In other words, the software uses the arm bits to individually select which devices will and which will not request interrupts. The second aspect that the software controls is the interrupt enable bit, I, which is in the status register (SR). The software can enable all armed interrupts by setting I=1 (sti), or it can disable all interrupts by setting I=0 (cli). The disabled interrupt state (I=0) does not dismiss the interrupt requests; rather, it postpones them until a later time, when the software deems it convenient to handle them. We will pay special attention to these enable/disable software actions. In particular, we will need to disable interrupts when executing nonreentrant code, but disabling interrupts will have the effect of increasing the response time of the software.
There are two general methods with which we configure external hardware so that it can request an interrupt. The first method is a shared negative logic level-active request like IRQ. All the devices that need to request interrupts have an open-collector negative logic interrupt request line. The hardware requests service by pulling the IRQ line low. The bar drawn over IRQ in hardware diagrams signifies negative logic; in other words, an interrupt is requested when IRQ is zero. Because the request lines are open collector, a pull-up resistor is needed to make IRQ high when no devices need service.
Normally these interrupt requests share the same interrupt vector. This means whichever device requests an
interrupt, the same interrupt service routine is executed. Therefore the interrupt service routine must first
determine which device requested the interrupt.
The original IBM-PC had only 8 dedicated edge-triggered interrupt lines, and the current PC I/O bus only has 15. This small number can be a serious limitation in a computer system with many I/O devices.
Observation: Microcomputer systems running in expanded mode often use shared negative logic
level-active interrupts for their external I/O devices.
Observation: Microcomputer systems running in single chip mode often use dedicated edge-triggered
interrupts for their I/O devices.
Observation: The number of interrupting devices on a system using dedicated edge-triggered
interrupts is limited when compared to a system using shared negative logic level-active interrupts.
Observation: Most Motorola microcomputers support both shared negative logic and dedicated edge-triggered interrupts.
Monday, January 11, 2010
Parallel Computer Architecture
Parallel computer architectures are now making their way into real applications! This fact is demonstrated by the large number of application areas covered in this book (see the section on applications of parallel computer architectures). The applications range from image analysis to quantum mechanics and databases. Still, the use of parallel architectures poses serious problems and requires the development of new techniques and tools.
This book is a collection of the best papers presented at the first workshop on two major research activities at the Universität Erlangen-Nürnberg and the Technische Universität München. At both universities, more than 100 researchers are working in the field of multiprocessor systems and network configurations and on methods and tools for parallel systems. The German Science Foundation (Deutsche Forschungsgemeinschaft) has been sponsoring the projects under grant numbers SFB 182 and SFB 342. Research grants in the form of a Sonderforschungsbereich are given to selected German universities in portions of three years following a thorough reviewing process; the overall duration of such a research grant is restricted to 12 years. The initiative at Erlangen-Nürnberg was started in 1987 and has been headed since that time by Prof. Dr. H. Wedekind. Work at TU München began in 1990; the head of this initiative is Prof. Dr. A. Bode. The authors of this book are grateful to the Deutsche Forschungsgemeinschaft for its continuing support in the field of research on parallel processing.
The first section of the book is devoted to hardware aspects of parallel systems. Here, a number of basic problems have to be solved. Latency and bandwidth of interconnection networks are a bottleneck for parallel process communication; optoelectronic media, discussed in this section, could change this fact. The scalability of parallel hardware is demonstrated with the multiprocessor system MEMSY, based on the concept of distributed shared memory. Scalable parallel systems need fault-tolerance mechanisms to guarantee reliable system behaviour even in the presence of defects in parts of the system, and an approach to fault tolerance for scalable parallel systems is discussed in this section.
The next section is devoted to performance aspects of parallel systems. Analytical models for performance prediction are presented, as well as a new hardware monitor system together with its evaluation software. Tools for the automatic parallelization of existing applications are a dream, but not yet a reality for the user of parallel systems. Different aspects of the automatic treatment of parallel applications are covered in the next section on architectures and tools for parallelization. Dynamic load balancing is an application-transparent mechanism of the operating system to guarantee equal load on the elements of a multiprocessor system. Randomized shared memory is one possible implementation of a virtual shared memory based on distributed-memory hardware.
Interface ActionListener
public interface ActionListener
The ActionListener interface is an addition to the Portlet interface. If an object wishes to receive action events in the portlet, this interface has to be implemented in addition to the Portlet interface.
public interface ActionEvent
extends Event
An ActionEvent is sent by the portlet container when an HTTP request is received that is associated with an action.
static int ACTION_PERFORMED
Event identifier indicating that a portlet request has been received that has one or more actions associated with it.
PortletAction getAction()
Deprecated. Use getActionString() instead
java.lang.String getActionString()
Returns the action string that this action event carries.
ACTION_PERFORMED
public static final int ACTION_PERFORMED
Event identifier indicating that a portlet request has been received that has one or more actions associated with it. Each action will result in a separate event being fired.
An event with this id is fired when an action has to be performed.
void actionPerformed(ActionEvent event)
Notifies this listener that the action which the listener is watching for has been performed
Method Detail
actionPerformed
public void actionPerformed(ActionEvent event)
throws PortletException
Notifies this listener that the action which the listener is watching for has been performed.
Parameters:
event - the action event
Throws:
PortletException - if the listener has trouble fulfilling the request
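Putting the pieces above together, a class that wants to receive action events implements ActionListener and handles each request in actionPerformed(). The package names in this sketch assume the Jetspeed-style portlet API that this Javadoc excerpt appears to describe (they may differ by portal vendor), and the class name SaveActionListener and the action string "save" are made up for illustration.

// Sketch only: package names assumed from the Jetspeed-style portlet API; verify against your portal's Javadoc.
import org.apache.jetspeed.portlet.PortletException;
import org.apache.jetspeed.portlet.event.ActionEvent;
import org.apache.jetspeed.portlet.event.ActionListener;

// In a real portlet this class would also implement (or extend a class that implements)
// the Portlet interface, as required by the text above.
public class SaveActionListener implements ActionListener {

    // Called by the portlet container when an HTTP request carrying an action arrives.
    public void actionPerformed(ActionEvent event) throws PortletException {
        String action = event.getActionString();  // preferred over the deprecated getAction()
        if ("save".equals(action)) {
            // handle the (hypothetical) "save" action here
        } else {
            throw new PortletException("unknown action: " + action);
        }
    }
}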
INTRO TO Data representation & Immutability IN JAVA
Data representation changes in scientific applications. Simple example: represent a point using Cartesian or polar coordinates. Polynomials (coefficients vs. point-value), matrices (sparse vs. dense).
Immutability. An immutable data type is a data type such that the value of an object never changes once constructed. Examples: Complex and String. When you pass a String to a method, you don't have to worry about that method changing the sequence of characters in the String. On the other hand, when you pass an array to a method, the method is free to change the elements of the array.
Immutable data types have numerous advantages: they are easier to use, harder to misuse, make it easier to debug code that uses them, make it easier to guarantee that the class variables remain in a consistent state (since they never change after construction), need no copy constructor, are thread-safe, work well as keys in a symbol table, and don't need to be defensively copied when used as an instance variable in another class. Disadvantage: a separate object for each value.
Josh Bloch, a Java API architect, advises that "Classes should be immutable unless there's a very good reason to make them mutable....If a class cannot be made immutable, you should still limit its mutability as much as possible."
Give an example where a function changes the value of some Complex object, leaving the invoking function with a variable whose value it cannot rely upon; a sketch follows.
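A sketch of such an example, using a hypothetical mutable variant of Complex (the Complex class in these notes is immutable, so the class below is invented purely to show the hazard):

// Hypothetical mutable version of Complex, written only to show the hazard:
// a method silently changes an object the caller still holds a reference to.
public class MutableComplex {
    private double re, im;  // not final, so they can change after construction

    public MutableComplex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    // Adds b into THIS object instead of returning a new one.
    public MutableComplex plus(MutableComplex b) {
        re += b.re;
        im += b.im;
        return this;
    }

    public String toString() {
        return re + " + " + im + "i";
    }

    public static void main(String[] args) {
        MutableComplex a = new MutableComplex(1.0, 2.0);
        MutableComplex b = new MutableComplex(3.0, 4.0);
        MutableComplex c = a.plus(b);  // looks like a pure function call...
        System.out.println(c);         // 4.0 + 6.0i
        System.out.println(a);         // also 4.0 + 6.0i: a was changed behind the caller's back
    }
}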
mutable          immutable
----------------------------------
Counter          Complex
MovingCharge     Charge
Draw             String
array            Vector
java.util.Date   primitive types
Picture          wrapper types
Final. Java provides language support to enforce immutability. When you declare a variable to be final, you are promising to assign it a value only once, either in an initializer or in the constructor. It is a compile-time error to modify the value of a final variable.
public class Complex {
private final double re;
private final double im;
public Complex(double real, double imag) {
re = real;
im = imag;
}
// compile-time error: re and im are final, so they cannot be reassigned
public Complex plus(Complex b) {
re = this.re + b.re; // oops, tries to overwrite the invoking object's value
im = this.im + b.im; // compile-time error since re and im are final
return new Complex(re, im);
}
}
It is good style to use the modifier final with instance variables whose values never change.
Serves as documentation that the value does not change.
Prevents accidental changes.
Makes programs easier to debug, since it's easier to keep track of the state: initialized at construction time and never changes.
Mutable instance variables. If the value of a final instance variable is mutable, the value of that instance variable (the reference to an object) will never change - it will always refer to the same object. However, the value of the object itself can change. For example, in Java, arrays are mutable objects: if you have a final instance variable that is an array, you can't change the array itself (e.g., to change its length), but you can change the individual array elements.
This creates a potential mutable hole in an otherwise immutable data type. For example, the following implementation of a Vector is mutable.
public final class Vector {
private final int N;
private final double[] coords;
public Vector(double[] a) {
N = a.length;
coords = a;
}
...
}
A client program can create a Vector by specifying the entries in an array, and then change the elements of the Vector from (3, 4) to (0, 4) after construction (thereby bypassing the public API).
double[] a = { 3.0, 4.0 };
Vector vector = new Vector(a);
StdOut.println(vector.magnitude()); // 5.0
a[0] = 0.0; // bypassing the public API
StdOut.println(vector.magnitude()); // 4.0
Defensive copy. To guarantee immutability of a data type that includes an instance variable of a mutable type, we perform a defensive copy. By creating a local copy of the array, we ensure that any change the client makes to the original array has no effect on the object.
public final class Vector {
private final int N;
private final double[] coords;
public Vector(double[] a) {
N = a.length;
// defensive copy
coords = new double[N];
for (int i = 0; i < N; i++) {
coords[i] = a[i];
}
}
...
}
Program Vector.java encapsulates an immutable array.
Global constants. The final modifier is also widely used to specify local or global constants. For example, the following appears in Java's Math library.
public static final double E = 2.7182818284590452354;
public static final double PI = 3.14159265358979323846;
If these variables were not declared final, a client could wreak havoc by re-assigning Math.PI = 1.0; since Math.PI is declared final, such an attempt is flagged as a compile-time error.
DEFINITION OF Encapsulation in Java: Access control, Getters and setters
Encapsulation in Java. Java provides language support for information hiding. When we declare an instance variable (or method) as private, the client (code written in another module) cannot directly access that instance variable (or method). The client can only access the API through the public methods and constructors. The programmer can modify the implementation of private methods (or use different instance variables) with the comfort that no client will be directly affected.
Program Counter.java implements a counter, e.g., for an electronic voting machine. It encapsulates a single integer to ensure that it can only get incremented by one at a time and to ensure that it never goes negative. The goal of data abstraction is to restrict which operations you can perform, so we can ensure that the data type value always remains in a consistent state. We can also add logging capability to hit(), e.g., to print a timestamp for each vote. In the 2000 presidential election, Al Gore received negative 16,022 votes on an electronic voting machine in Volusia County, Florida. The counter variable was not properly encapsulated in the voting machine software!
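A minimal sketch of the kind of class the note describes; the actual Counter.java on the booksite may differ in details (for example, it may also store a name for the counter):

// Sketch of an encapsulated counter: the count can only go up, one hit at a time.
public class Counter {
    private int count = 0;  // private: no client can reach in and set it to -16022

    public void hit() {
        count++;            // the only way to change the count: increment by one
    }

    public int value() {
        return count;
    }

    public static void main(String[] args) {
        Counter votes = new Counter();
        votes.hit();
        votes.hit();
        System.out.println(votes.value());  // prints 2; the count can never go negative
    }
}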
Access control. Java provides a mechanism for access control to prevent a variable or method in one part of a program from being accessed directly in another. We have been careful to define all of our instance variables with the private access modifier. This means that they cannot be directly accessed from another class, thereby encapsulating the data type. For this reason, we always use private as the access modifier for our instance variables and recommend that you do the same. If you use public, then you will greatly limit any opportunity to modify the class over time: client programs may rely on your public variable in thousands of places, and you will not be able to remove it without breaking dependent code.
Getters and setters. A data type should not have public instance variables. You should obey this rule not just in letter, but also in spirit. Novice programmers are often tempted to include get() and set() methods for each instance variable, to read and write its value.
Complex a = new Complex(1.0, 2.0);
Complex b = new Complex(3.0, 4.0);
// violates spirit of encapsulation
Complex c = new Complex(0.0, 0.0);
c.setRe(a.re() + b.re());
c.setIm(a.im() + b.im());
// better design
Complex a = new Complex(1.0, 2.0);
Complex b = new Complex(3.0, 4.0);
Complex c = a.plus(b);
The purpose of encapsulation is not just to hide the data, but to hide design decisions which are subject to change. In other words, the client should tell an object what to do, rather than asking an object about its state (get()), making a decision, and then telling it how to do it (set()). Usually it's better design not to have the get() and set() methods at all. When a get() method is warranted, try to avoid including a set() method.
Designing APIs
Designing APIs. Often the most important and most challenging step in building software is designing the APIs. In many ways, designing good programs is more challenging than writing the code itself. It takes practice, careful deliberation, and many iterations.
Specification problem. Document the API in English. Clearly articulate behavior for all possible inputs, including side effects. "Write to specification." Difficult problem. Many bugs introduced because programmer didn't correctly understand description of API. See booksite for information on automatic documentation using Javadoc.
Wide interfaces. "API should do one thing and do it well." "APIs should be as small as possible, but no smaller." "When in doubt, leave it out." (It's easy to add methods to an existing API, but you can never remove them without breaking existing clients.) APIs with lots of bloat are known as wide interfaces. Supply all necessary operations, but no more. Try to make methods orthogonal in functionality. No need for a method in Complex that adds three complex numbers since there is a method that adds two. The Math library includes methods for sin(), cos(), and tan(), but not sec().
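For instance, with a two-argument plus() a client can already add three complex numbers by chaining calls, so a three-argument version would only widen the API (the snippet assumes the immutable Complex class from the previous section):

Complex a = new Complex(1.0, 2.0);
Complex b = new Complex(3.0, 4.0);
Complex c = new Complex(5.0, 6.0);
Complex d = a.plus(b).plus(c);  // chaining the two-argument plus(); no three-way method needed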
Java libraries tend to have wide interfaces (some designed by pros, some by committee). Sometimes this seems to be the right thing, e.g., String. Although, sometimes you end up with poorly designed APIs that you have to live with forever.
Deprecated methods. Sometimes you end up with deprecated methods that are no longer fully supported, but you still need to keep them or break backward compatibility. Once Java included a method Character.isSpace(), programmers wrote programs that relied on its behavior. Later, they wanted to change the method to support additional Unicode whitespace characters. Can't change the behavior of isSpace() or that would break many programs. Instead, add a new method Character.isWhiteSpace() and "deprecate" the old method. The API is now more confusing than needed.
Almost all methods in java.util.Date are deprecated in favor of java.util.GregorianCalendar.
Backward compatibility. The need for backward compatibility shapes much of the way things are done today (from operating systems to programming languages to ...). [Insert a story.]
Standards. It is easy to understand why writing to an API is so important by considering other domains. Fax machines, radio, MPEG-4, MP3 files, PDF files, HTML, etc. Simpler to use a common standard. Lack of incompatibilities enables business opportunities that would otherwise be impossible. One of the challenges of writing software is making it portable so that it works on a variety of operating systems including Windows, OS X, and Linux. Java Virtual Machine enables portability of Java across platforms.
Sunday, January 10, 2010
String processing.
String processing. The program CommentStripper.java reads in a Java (or C++) program from standard input, removes all comments, and prints the result to standard output. This would be useful as part of a Java compiler. It removes /* */ and // style comments using a 5 state finite state automaton. It is meant to illustrate the power of DFAs, but to properly strip Java comments, you would need a few more states to handle extra cases, e.g., quoted string literals like s = "/***//*". The picture below is courtesy of David Eppstein.
DEFINITION & DESCRIPTION OF Finite state automata.
Finite state automata. A deterministic finite state automaton (DFA) is, perhaps, the simplest type of machine that is still interesting to study. Many of its important properties carry over to more complicated machines. So, before we hope to understand these more complicated machines, we first study DFAs. However, it is an enormously useful practical abstraction because DFAs still retain sufficient flexibility to perform interesting tasks, yet the hardware requirements for building them are relatively minimal. DFAs are widely used in text editors for pattern matching, in compilers for lexical analysis, in web browsers for html parsing, and in operating systems for graphical user interfaces. They also serve as the control unit in many physical systems including: vending machines, elevators, automatic traffic signals, and computer microprocessors. Also network protocol stacks and old VCR clocks. They also play a key role in natural language processing and machine learning.
A DFA captures the basic elements of an abstract machine: it reads in a string, and depending on the input and the way the machine was designed, it outputs true or false. A DFA is always in one of N states, which we name 0 through N-1. Each state is labeled true or false. The DFA begins in a distinguished state called the start state. As the input characters are read in one at a time, the DFA changes from one state to another in a prespecified way. The new state is completely determined by the current state and the character just read in. When the input is exhausted, the DFA outputs true or false according to the label of the state it is currently in.
The picture above shows an example of a DFA that accepts binary strings that are multiples of 3. For example, the machine rejects 1101, since 1101 in binary is 13 in decimal, which is not divisible by 3. On the other hand, the machine accepts 1100, since it is 12 in decimal.
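A small sketch of that machine in Java: state i records that the bits read so far, interpreted as a binary number, are congruent to i mod 3, and reading bit b moves the machine from state i to state (2i + b) mod 3.

// DFA that accepts exactly the binary strings whose value is a multiple of 3.
public class DivisibleByThreeDFA {

    public static boolean accepts(String bits) {
        int state = 0;                       // start state, which is also the accepting state
        for (int i = 0; i < bits.length(); i++) {
            int bit = bits.charAt(i) - '0';  // next input symbol: 0 or 1
            state = (2 * state + bit) % 3;   // transition function of the DFA
        }
        return state == 0;                   // accept iff the number read is divisible by 3
    }

    public static void main(String[] args) {
        System.out.println(accepts("1100"));  // true:  1100 is 12, divisible by 3
        System.out.println(accepts("1101"));  // false: 1101 is 13, not divisible by 3
    }
}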
Abstract machines
Abstract machines. Modern computers are capable of performing a wide variety of computations. An abstract machine reads in an input string, and, depending on the input, outputs true (accept), outputs false (reject), or gets stuck in an infinite loop and outputs nothing. We say that a machine recognizes a particular language if it outputs true for any input string in the language and false otherwise. The artificial restriction to such decision problems is purely for notational convenience. Virtually all computational problems can be recast as language recognition problems. For example, to determine whether the integer 97 is prime, we can ask whether 97 is in the language consisting of all primes {2, 3, 5, 7, 11, 13, ...}; to determine the decimal expansion of the mathematical constant π, we can ask whether 7 is the 100th digit of π; and so on.
We would like to be able to formally compare different classes of abstract machines in order to address questions like Is a Mac more powerful than a PC? Can Java do more things than C++? To accomplish this, we define a notion of power. We say that machine A is at least as powerful as machine B if machine A can be "programmed'" to recognize all of the languages that B can. Machine A is more powerful than B, if in addition, it can be programmed to recognize at least one additional language. Two machines are equivalent if they can be programmed to recognize precisely the same set of languages. Using this definition of power, we will classify several fundamental machines. Naturally, we are interested in designing the most powerful computer, i.e., the one that can solve the widest range of language recognition problems. Note that our notion of power does not say anything about how fast a computation can be done. Instead, it reflects a more fundamental notion of whether or not it is even possible to perform some computation in a finite number of steps.
DEFINITION OF Turing machines
Turing machines are the most general automata. They consist of a finite set of states and an infinite tape which contains the input and is used to read and write symbols during the computation. Since Turing machines can leave symbols on their tape at the end of the computation, they can be viewed as computing functions: the partial recursive functions. Despite the simplicity of these automata, any algorithm that can be implemented on a computer can be modeled by some Turing machine.
Turing machines are used in the characterization of the complexity of problems. The complexity of a problem is determined by the efficiency of the best algorithm that solves it. Measures of an algorithm's efficiency are the amount of time or space that a Turing machine requires to implement the algorithm. A computation's time is the number of configurations involved in that computation, and its space corresponds to the number of positions on its tape that were used.
DEFINITION OF Automata Theory
Automata theory is a further step in abstracting your attention away from any
particular kind of computer or particular programming language. In automata theory
we consider a mathematical model of computing. Such a model strips the computational
machinery—the “programming language”—down to the bare minimum, so that it’s easy
to manipulate these theoretical machines (there are several such models, for different purposes, as you’ll soon see) mathematically to prove things about their capabilities.
For the most part, these mathematical models are not used for practical programming
problems. Real programming languages are much more convenient to use. But the very
flexibility that makes real languages easier to use also makes them harder to talk about in a formal way. The stripped-down theoretical machines are designed to be examined
mathematically.
What’s a mathematical model? You’ll see one shortly, called a “finite-state machine.”
The point of this study is that the mathematical models are, in some important ways, equivalent to real computers and real programming languages. What this means is that any problem that can be solved on a real computer can be solved using these models, and vice versa. Anything we can prove about the models sheds light on the real problems of computer programming as well.
The questions asked in automata theory include these: Are there any problems that
no computer can solve, no matter how much time and memory it has? Is it possible to
PROVE that a particular computer program will actually solve a particular problem? If a computer can use two different external storage devices (disks or tapes) at the same time,does that extend the range of problems it can solve compared to a machine with only one such device?
There is also a larger question lurking in the background of automata theory: Does
the human mind solve problems in the same way that a computer does? Are people
subject to the same limitations as computers? Automata theory does not actually answer this question, but the insights of automata theory can be helpful in trying to work out an answer. We'll have more to say about this in the chapter on artificial intelligence.
Friday, January 8, 2010
DESCRIPTION OF EVENT CLASSES IN JAVA (OBJECT ORIENTED PROGRAMMING)
EVENT CLASSES
The classes that represent events are the core of Java's event handling mechanism.
Thus we begin our study of event handling with a tour of the event classes. As you will see, they provide a consistent, easy-to-use means of encapsulating events.
At the root of the Java event class hierarchy is EventObject, which is in java.util. It is the superclass for all events. Its one constructor is shown here:
EventObject(Object src)
Here, src is the object that generates this event.
EventObject contains two methods: getSource() and toString(). The getSource() method returns the source of the event. Its general form is shown here:
Object getSource()
As expected, toString() returns the string equivalent of the event.
The class AWTEvent, defined within the java.awt package, is a subclass of EventObject. It is the superclass (either directly or indirectly) of all AWT-based events used by the delegation event model. Its getID() method can be used to determine the type of the event. The signature of this method is shown here:
int getID()
At this point, it is important to know only that all of the other classes discussed in this section are subclasses of AWTEvent.
TO SUMMARIZE:
EventObject is the superclass of all events.
AWTEvent is the superclass of all AWT events that are handled by the delegation event model.
The package java.awt.event defines several types of events that are generated by various user interface elements. The table below enumerates the most important of these event classes and provides a brief description of when they are generated.
Event Class        Description
ActionEvent        Generated when a button is pressed, a list item is double-clicked, or a menu item is selected.
AdjustmentEvent    Generated when a scroll bar is manipulated.
ComponentEvent     Generated when a component is hidden, moved, resized, or becomes visible.
ContainerEvent     Generated when a component is added to or removed from a container.
FocusEvent         Generated when a component gains or loses keyboard focus.
InputEvent         Abstract superclass for all component input event classes.
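To tie the table to code, here is a small self-contained AWT example (the class name and button label are made up) in which a listener inspects the event object it receives through getSource(), getID(), and getActionCommand():

import java.awt.Button;
import java.awt.FlowLayout;
import java.awt.Frame;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class EventDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("Event demo");
        frame.setLayout(new FlowLayout());
        Button button = new Button("Press me");

        // The delegation event model: the button (source) delivers ActionEvents to this listener.
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                System.out.println("source:  " + e.getSource());         // inherited from EventObject
                System.out.println("id:      " + e.getID());             // inherited from AWTEvent
                System.out.println("command: " + e.getActionCommand());  // defined by ActionEvent
            }
        });

        frame.add(button);
        frame.setSize(200, 100);
        frame.setVisible(true);  // note: closing the window does not exit this bare-bones demo
    }
}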
Thursday, January 7, 2010
DEFINITION OF LABEL CONTROL, STRING CONSTANT, VARIABLES
Label Control:
Currently the form does not contain any indication of what should be entered in the textbox. For such a purpose, the Label control is used, and the process of adding it is the same. As with the other controls studied so far, the “Text” property is used here to indicate what should be displayed in the Label.
String constant:
When you observe the earlier code for the message box, you will notice that when displaying a predetermined set of characters (also known as a string) in the message box, we used double quotes (“”). However, when we wanted the contents of the textbox, which obviously depend upon the input of the user, we did not use double quotes. Anything written outside double quotes is considered to be something that the compiler/runtime should evaluate, whereas anything written inside double quotes is treated as a string constant.
Variables:
Variables are placeholders that can hold data values. For example, a person's age can be stored in a variable for further processing. Similarly, the Text property of a TextBox is also a variable, as it stores the string value typed in the textbox.
Since C# is a strictly typed language, you must identify the data type (e.g., integer, decimal, etc.) while creating a variable. The process of creating a variable is known as declaration. Below is an example of declaring an integer variable, together with a statement that stores a value in the declared variable, which is known as assignment. When the variable is assigned a value for the first time, it is known as initialization.
The declaration and assignment can also be combined into a single statement, as the last line of the sketch below shows.
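The original example statements did not survive in this post; a minimal reconstruction might look like the following (the syntax shown happens to be identical in C# and Java):

int age;          // declaration: creates an integer variable named age
age = 20;         // assignment: stores a value in the declared variable (the first assignment is initialization)
int height = 72;  // declaration and initialization combined in a single statement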
Variables can also be created in classes; when created in classes they are known as properties. For example, Microsoft created a variable named Text in the Label class, which is termed a property of Label.
INTRO ABOUT IDE • Visual Studio • Solution Explorer • ToolBox IN .NET PROGRAMMING
IDE
An Integrated Development Environment (IDE) brings all of a programmer's tools into one convenient place. There was a time when programmers had to edit files, save them, run the compiler and then the linker to build the application, and finally run it through a separate debugger.
Today's IDEs bring the editor, compiler, linker and debugger into one place, along with project-management tools, to increase programmer productivity.
Visual Studio
Visual Studio is an IDE for building ASP.NET Web applications, XML Web Services, desktop applications, and mobile applications. Visual Basic, Visual C++, Visual C#, and Visual J# all use the same integrated development environment (IDE), which allows them to share tools and facilitates the creation of mixed-language solutions.
To create a new project, navigate to the menu indicated in figure 2.1 below.
After clicking that menu item, the screen shown in figure 2.2 will be displayed, from which you have to select the project type and language. Figure 2.2 shows the selection of a “Windows Application” using the C# language. After choosing the appropriate template, i.e. “Windows Application”, give the project a name and specify the location where you want the application to be stored.
Once you create the project, a screen similar to the following will be displayed.
As can be observed, the IDE consists of a number of menus and windows. Let’s look at the purpose of each window visible on the screen. At the top left is the Solution Explorer; figure 2.4 focuses on the Solution Explorer only.
Solution Explorer allows you to view items and perform item-management tasks in a solution or a project. It also allows you to use the Visual Studio editors to work on files outside the context of a solution or project. A solution and its projects appear in a hierarchical display that provides updated information about the status of your solution, projects, and items, which lets you work on several projects at the same time. Because the selected project and item determine which toolbar icons are shown, the list below is only a partial representation of those you might encounter while working in Solution Explorer.
Properties
Displays the appropriate property user interface for the selected item in the tree view.
Show All Files
Shows all project items, including those that have been excluded and those that are normally hidden.
Refresh
Refreshes the state of the items in the selected project or solution.
View Class Diagram
Launches Class Designer to display a diagram of the classes in the current project. For more information, see Designing Classes and Types.
View Code
Opens the selected file for editing in the Code Editor.
View Designer
Opens the selected file for editing in the designer mode of the Code Editor.
Add New Solution Folder
Adds a Solution Folder to the selected item. You can add a Solution Folder to the solution or to an existing Solution Folder.
Apart from Solution Explorer, you can also see a Properties window and a Toolbox, shown in the figures that follow.
The details of the above-mentioned windows will be discussed later.
In figure 2.3 you must have observed the area at the center of the screen: this is the form designer.
Tuesday, January 5, 2010
DEFINITION & DESCRIPTION ABOUT Array:
Array:
An array is a very good example of how simple structures combine to form a composite structure or type. (Composite data types are those that are made of other data types/structures.)
An array is defined as a:
• List of values
• All of same type
• Individual elements identified by an index
• Variable holds the address of first element in the list
In the declaration of an array, an integer is passed that specifies the size of the array, whereas while using the array an integer is passed that indicates the index from which the value should be fetched. Indexes start from zero, as the sketch below shows.
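A minimal C# sketch (the array name marks, its size and its values are purely illustrative):
int[] marks = new int[5];   // declaration: the size (5) is passed when the array is created
marks[0] = 72;              // indexes start from zero
int first = marks[0];       // an index is passed to fetch a value from the array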
Implementation of Array:
In memory, arrays are allocated as contiguous blocks. As mentioned, accessing an individual element requires passing an index. This index value, multiplied by the size of each block (element), is then added to the address of the first element of the array in order to obtain the address of the particular block, i.e. address of element i = base address + i × element size. The sketch below simulates this calculation.
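A sketch in C# that only simulates the calculation with plain numbers, since C# manages real addresses itself (the base address 5000 is invented for illustration):
const int elementSize = 4;      // assume 4-byte int elements
long baseAddress = 5000;        // hypothetical address of element 0
int index = 3;
long elementAddress = baseAddress + index * elementSize;   // 5000 + 3 * 4 = 5012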
Multi-dimensional arrays
Arrays can have more than one dimension, in which case they might be declared as:
int results_2d[20][5];
int results_3d[20][5][3];
Each index has its own set of square brackets.
It is useful in describing an object that is physically two-dimensional, such as a map or a checkerboard. It is also useful in organizing a set of values that are dependent upon two (or more) inputs.
WHAT IS A DATA TYPE & WHAT ARE THE TYPES OF DATA
Data Type:
A data type is composed of two things
• Range of possible values
• Set of operations that can be performed on data.
Initially, languages provided a non-extensible data-type system. But as the benefits of software technology became clear, people started applying it to complex and diversified sets of problems, and thus a need for an extensible data-type system was felt.
By extensible it is meant that, apart from using the built-in data types of a language, programmers can combine these types to create their own data types.
Some major categories of data types include:
Binary and Decimal Integers: converted into a bit string in which the leftmost bit represents the sign of the number.
Real Numbers: in a 32-bit space, 24 bits represent the coefficient (mantissa) and 8 bits represent the exponent.
Character Strings: also converted into bit strings, but by means of an encoding that associates a specific bit string with each character. An eight-bit binary sequence can represent 256 distinct characters.
Abstract Data Types (ADTs)
As mentioned above, a data type is a set of values together with the relevant operations. The collection of values and operations forms a mathematical construct, or represents a mathematical concept, that may be implemented using a particular hardware or software data structure. The term “abstract data type” refers to the basic mathematical concept that defines the data type; the definition of an ADT is not concerned with implementation details. A small sketch follows.
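For illustration only (not taken from the original post): a stack ADT in C# can expose just its operations while hiding how the values are actually stored.
// A hypothetical stack ADT: callers see only Push, Pop and Count,
// not the List<int> (it could equally be an array or linked nodes) used underneath.
public class IntStack
{
    private readonly System.Collections.Generic.List<int> items =
        new System.Collections.Generic.List<int>();

    public int Count { get { return items.Count; } }

    public void Push(int value)
    {
        items.Add(value);                   // place a value on top of the stack
    }

    public int Pop()
    {
        int top = items[items.Count - 1];   // the last element is the top of the stack
        items.RemoveAt(items.Count - 1);
        return top;
    }
}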
Variable: a variable is something whose value need not remain the same. To ensure that the machine interprets the program correctly, the programmer must specify the data type of each variable.
Variables and Computer Memory: whatever value is assigned to a variable is converted into a bit string and stored in memory. By bit string we mean a series of “1s” and “0s”, i.e. the data is converted into binary. Each cell of the computer’s memory is capable of holding only one of two values: “1”, which means ON (presence of a specific voltage), and “0”, which means OFF (absence of that voltage).
Variable Declaration:
The statement that creates a variable is known as a declaration. A declaration includes the data type and the name of the variable. Following is an example of a declaration:
int a;
The above statement communicates that “a” should have the data type “int”, i.e. it is capable of storing the range of values supported by “int”, and all the operations of “int” can be applied to “a”. This statement also causes memory to be allocated, and that memory will be referred to by the variable “a”. Each memory location has an address.
The address operator can be used to retrieve the address of a particular variable, e.g. “&a” will retrieve the address of “a”.
Pointers and Pointer Variable:
The address of a particular variable is often known as a pointer, and a variable that holds the address of another variable is known as a pointer variable. A sketch follows.
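For illustration only: C# permits this style of pointer code only inside an unsafe block (the project must be compiled with unsafe code enabled), so treat it as a sketch of the idea rather than everyday C# practice.
// Inside a method of a project that allows unsafe code.
int a = 10;
unsafe
{
    int* p = &a;        // &a retrieves the address of a; p is a pointer variable
    int value = *p;     // dereferencing p reads the value stored at that address (10)
}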
INTRO ABOUT Data Structure & Data Type
Data Structure:
A data structure is a construct within a programming language that stores a collection of data.
Before moving ahead, a number of concepts must be understood.
Data Type:
A data type is composed of two things
• Range of possible values
• Set of operations that can be performed on data.
Initially, languages provided a non-extensible data-type system. But as the benefits of software technology became clear, people started applying it to complex and diversified sets of problems, and thus a need for an extensible data-type system was felt.
By extensible it is meant that, apart from using the built-in data types of a language, programmers can combine these types to create their own data types.
Monday, January 4, 2010
The article talks about the flu and what causes it and how to resist it using home remedies and some common sense health approach
Prepare for Avian Influenza!
Various U.S. and U.N. agencies and the Council on Foreign Relations
are spreading the word that the Avian Influenza, if it breaks out this fall or
winter, could be as severe as the worldwide Spanish Influenza epidemic
of 1918, and they are predicting hundreds of millions of deaths worldwide.
This influenza, currently isolated in China, is a hemorrhagic illness.
It kills half of its victims by rapidly depleting ascorbate (vitamin C)
stores in the body, inducing scurvy and collapse of the arterial
blood supply, causing internal hemorrhaging of the lungs
and sinus cavities.
Most people today have barely enough vitamin C in their bodies
(typically 60 mg per day) to prevent scurvy under normal living conditions,
and are not prepared for this kind of illness. (Vitamin C deficiency
is the root cause of many infant and childhood deaths worldwide,
and it is the root cause of Sudden Infant Death Syndrome - SIDS.)
The way to prepare yourself and protect your family from this influenza
is not a vaccine or anti-viral drug. If vaccines and/or anti-viral drugs are
offered to you, please refuse them. These actually reduce your immunity;
vaccines contain many toxic components, such as aluminum and
mercury, and anti-viral drugs interfere with critical body processes.
Historical evidence of vaccinations has shown that they actually
increase the chances of becoming severely ill. The best way to
prepare for influenza is by enhancing your immune system
and increasing the amount of vitamin C in your body.
The supplements I suggest below can be obtained at good quality
supplement stores such as Vitamin Shoppe or online at supplement
discounters although the powdered varieties are
generally only available online. Buy good quality supplements
such as those made by NOW Foods, Source Naturals, Jarrow,
Nature's Way, Vitamin Shoppe (store brand), etc. (These are not
endorsements, but suggestions based on my personal experience).
Do not use drug store supplements.
1. Begin increasing the amount of vitamin C that you take each day
to very high levels, spread over the course of the day, in divided doses
taken with meals. Start at 1000 mg per meal, and increase slowly to
2000-4000 mg per meal. (These are adult doses, modify by body weight
for children.) Your optimal dose is just below the point where your body
complains by giving you mild diarrhea.
This is called the "bowel tolerance dose."
Such doses are perfectly safe - vitamin C is natural to our bodies
and needed for many body processes. Most people don't get nearly
enough. Stock up on this vital nutrient - buy in powder form,
1-pound or 3-pound canisters (ascorbic acid form).
Mix with water or fruit juice. Be sure to take vitamin C with food
that will coat your stomach to prevent stomach upset,
such as organic soymilk.
2. Take 6000 mg of the amino acid lysine per day, 2000 mg per meal
(adult dose, modify by body weight for children).
Lysine is a natural protease inhibitor -
it prevents bacteria and viruses from spreading in your body.
You can obtain it in tablet, capsule, or powder form.
The latter form is the least expensive;
buy several pound containers of it.
3. Take a high-potency multivitamin/ multimineral tab, and a
calcium/magnesium supplement, every day.
4. Drink at least 2 quarts (8 cups, 2 liters) of non-caffeinated liquids
per day. Spring water and/or decaffeinated green tea made with
spring water are best. Do not drink diet soda or consume anything
with aspartame or other artificial sweeteners.
5. Stock up on other anti-viral agents and nutrients: l-proline and
l-glycine (amino acids - at least one pound of each, in powder form),
turmeric extract capsules, ginger capsules, garlic capsules,
15-mg zinc/1-mg copper capsules or tabs, oil of oregano
(Gaia Herbs brand is a good one), decaffeinated green tea extract,
N-Acetyl Cysteine (NAC), and non-gmo or organic soy protein drink
concentrate.
(Note to expectant mothers: Do not use oregano oil or green tea extract.)
6. In advance (right now), find a chelation or alternative health clinic
that is willing to administer intravenous vitamin C infusions.
This may be necessary if you are stricken by the Avian Flu and
find that you cannot keep up with it with the oral dosage.
(Refer to Dr. Robert Cathcart's intravenous vitamin C preparation
document if the clinic needs this information:
http://www.orthomed .com/civprep. htm )
7. If you do become ill, start increasing your vitamin C dosage
dramatically - your bowel tolerance dose will rise as it is used to
detoxify your body from the virus toxins; it may rise to as much as
100,000-200, 000 mg (100 to 200 grams) per day (adult dose).
Take up to 4000 mg per dose, with increased number of dosages.
Start taking 12,000 mg of each of l-lysine, l-proline and l-glycine per day,
in divided doses. Take 1000 mg oregano, 4000 mg turmeric extract,
4000 mg ginger, 4000 mg garlic, 45 mg zinc/3 mg copper,
1500 mg N-acetyl cysteine (NAC), and 2000 mg green tea per day,
in divided doses.
Increase fluid intake to 3-4 quarts (liters) per day. (These are the
adult doses, modify by weight for children.) Eat easily-digestable meals
complemented with 1/2-scoop soy protein shakes.
Continue this regimen until all signs of illness have subsided.
If the illness is not controlled by the regimen, obtain a series
(2-3 per week) of 30-gram Vitamin C intravenous infusions at a
chelation or alternative health clinic, with higher dosages if necessary
for pneumonia or previously compromised immunity
(e.g., AIDS or CFIDS). (Important: See note above regarding use
of this regimen during pregnancy; do not use oregano oil or green tea
extract during pregnancy.)
8. If you are currently taking Lipitor or another cholesterol- lowering (statin)
drug, stop taking it immediately. These drugs are very damaging to the
immune system. The above vitamin C and lysine regimen will (through a
completely different mechanism) naturally balance your cholesterol and
protect you from heart disease. If you continue to take a maintenance
dose of 6000 mg of vitamin C and 6000 mg of lysine per day,
you will never need to take statin drugs ever again.
(For more information on this, click here for an article about statin side
effects, and an excellent article at Dr. Mercola's website
http://www.mercola. com/article/ statins.htm )
9. You must take the regimen above every day, consistently.
After the danger period has passed, I recommend that you continue
the regimen at the level of 6000 mg vitamin C and 6000 mg lysine
per day (adult dose, modify by weight for children).
You will enjoy better health, lose fewer days to illness, and protect
yourself against heart disease. (If you choose not to continue the regimen,
please taper off gradually.) Use your stocks of anti-viral nutrients
for any illness you may encounter.
Drawbacks of Procedural Programming:
• It is often the case that functions depend on each other; therefore, when updating a function it may be necessary to update its dependents as well. However, if the dependencies are distributed across various modules, it becomes a lengthy task to track and update the dependents accordingly. To avoid such situations the principle of cohesion was adopted, which suggests grouping only related functions into a module.
• Another problem in the procedural paradigm was that emphasis was laid only on actions, or functions; however, the actual purpose for which computer programs are written is the storage and management of data, which was given second-class status in procedural programming.
• In addition, there was another data-related problem that troubled the developer community: in procedural programming every function had complete access to the data, so a function with poor logic could modify important data in an unexpected way, negatively affecting other functions as well.
• Another data-related problem was that, if all functions had access to the data and the storage pattern of the data changed, then all the functions had to be modified accordingly. The sketch below illustrates the unrestricted-access problem.
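A contrived C# sketch of the unrestricted-access problem (all names are invented for illustration; the global-style data of procedural programs is modelled here with static fields):
// Any function can reach in and change the shared data,
// so a single badly written function can corrupt it for everyone.
public static class AccountData
{
    public static double Balance = 100.0;   // shared, globally accessible data
}

public static class Reports
{
    public static void PrintBalance()
    {
        System.Console.WriteLine(AccountData.Balance);
    }

    // A poorly written "report" that silently modifies the shared data.
    public static void BuggyReport()
    {
        AccountData.Balance = 0;            // unexpected side effect on shared data
    }
}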
what is C# Language:
C# Language:
C# is an elegant and type-safe object-oriented language that enables developers to build a wide range of secure and robust applications that run on the .NET Framework. You can use C# to create many types of applications, including desktop applications, applications hosted by a web server, and more. C# is considered a highly expressive yet simple and easy-to-learn language. Its syntax is similar to that of Java and C++, so programmers of those languages can pick up C# very quickly. However, the syntax of C# simplifies many of the complexities found in C++. Java made similar simplifications, but C# introduces a number of powerful features that are not available in Java. A minimal C# program is sketched below.
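A minimal sketch of a complete C# console program (not taken from the original post):
using System;

class Program
{
    static void Main()
    {
        // Prints a single line to the console.
        Console.WriteLine("Hello from C#");
    }
}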
How CLR Works IN Microsoft .NET
How CLR Works
The result of compiling a C# program is not native executable code but MSIL.
When you compile a C# program, output containing MSIL is generated rather than native executable code. It is the job of the CLR to translate this intermediate code into executable code when the program is run. Thus, any program compiled to MSIL can run in any environment for which the CLR is implemented; this is one of the factors that contribute to the portability of .NET applications. It is also important to know that the CLR converts Microsoft Intermediate Language into executable code using a JIT (just-in-time) compiler. This conversion, as the name of the compiler suggests, happens on demand, as each part of the program is needed.
Managed Code & Unmanaged Code:
Code that is executed by the CLR is sometimes referred to as managed code, in contrast to unmanaged code, which is compiled into native machine language rather than MSIL and targets the system directly rather than going through the CLR.
INTRODUCTION ABOUT Microsoft .NET, Common Language Runtime (CLR), MSIL (Microsoft Intermediate Language) & Metadata
Microsoft .NET is actually a set of Microsoft software technologies for connecting information, people, systems, and devices. It enables a high level of software integration through the use of Web services that are small, discrete, building-block applications that connect to each other as well as to other, larger applications over the Internet.
.NET Framework 2.0 is a part of Microsoft .NET. To understand what .NET Framework 2.0 is, let us first see what is meant by the term framework.
Framework:
It is a set of assumptions, concepts, values, and practices that constitutes a way of viewing reality. An example of a framework is “The Primary Framework for literacy and mathematics” developed by the “Department of Children, Schools and Families, UK”. This framework has been designed to support teachers and schools in delivering high-quality learning and teaching for all children. It contains detailed guidance and materials, including concepts, values and practices, to support literacy and mathematics in primary schools and settings.
Software Frameworks: support the development of software by providing a collection of items that are reusable as a group. They have carefully designed plug-points into which the user inserts code to customize or extend the framework.
The .NET Framework is thus a set of concepts, values, practices and items that can be used to develop the next generation of applications. The .NET Framework has defined plug-in points that allow users to write code in order to develop customized applications. The .NET Framework has two main components:
The common language runtime (CLR): The CLR can be considered as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness.
The .NET framework class library: The class library is a comprehensive object oriented collection of reusable types/classes that can be used to develop a range of applications.
If we look into what enables the CLR to manage the code execution, we will come to the conclusion that a major role in performing this activity is played by metadata and MSIL (Microsoft Intermediate Language).
MSIL (Microsoft Intermediate Language):
When compiling to managed code, the compiler, rather than producing an executable version of your code, translates your source into a pseudocode known as Microsoft Intermediate Language (MSIL). MSIL is a set of portable instructions that are independent of any specific CPU and can be efficiently converted to native code. In short, MSIL defines a portable assembly language.
Metadata:
Compilers that target the runtime’s facilities must emit metadata in the compiled code that describes the types, members and references in the code. The runtime uses metadata to locate and load classes, lay out instances in memory, resolve method invocations, generate native code, enforce security and set run-time context boundaries. A small reflection sketch follows.
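As an illustration only: the System.Reflection API exposes this metadata to C# code, so a program can list the members of a type at run time.
using System;
using System.Reflection;

class MetadataDemo
{
    static void Main()
    {
        // Reads the metadata of System.String and prints its first few method names.
        Type t = typeof(string);
        MethodInfo[] methods = t.GetMethods();
        for (int i = 0; i < 5 && i < methods.Length; i++)
        {
            Console.WriteLine(methods[i].Name);
        }
    }
}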
THE HISTORY AND DEVELOPMENT OF MULTIMEDIA: A STORY OF INVENTION, INGENUITY AND VISION.
Today multimedia might be defined as the seamless digital integration of text, graphics, animation, audio, still images and motion video in a way that provides individual users with high levels of control and interaction. The evolution of Multimedia is a story of the emergence and convergence of these technologies.
As these technologies developed along separate paths for disparate purposes, visionaries saw the possibilities of the sum of the parts, as well as its potential personal applications in the broader societal context. This chapter highlights visionaries and technological developments from the invention of the printing press to the emergence of the WWW.
"The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only at the salient items, and can follow at any time contemporary trails which lead him all over civilisation at a particular epoch. There is a new profession of trailblazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world's record, but for his disciples the entire scaffolding by which they were erected." Vannevar Bush (1945).
This chapter is constructed around five themes developed over a time line. Presented within an interactive timeline framework, the reader has the option to pursue elaboration with a click of the mouse.
Visionaries: From the ingenious idea of the programmable computer, trace the innovations of the outstanding thinkers who had a direct impact on the explosion of the technological age.
Text, Processing and Software: Inventions and innovations that spawned the development of software, enabling computers to move from mathematical processing to technology that creates and delivers multimedia.
Computers: From the printing press, through the exclusive military, academic and corporate worlds, trace computer development into the ubiquitous role of today's desktop personal computer.
Audio & Communication: From the telegraph signal to cellular telephones, follow the development from signal transmission to the digital transmission of voice.
Video & Animation: From manually manipulated negative film and hand-drawn sketches, video and animation develop into sophisticated digital creation and rendering of motion.