
Thursday, 30 April 2015

Automatic test pattern generation

ATPG (an acronym for both automatic test pattern generation and automatic test pattern generator) is an electronic design automation method/technology used to find an input (or test) sequence that, when applied to a digital circuit, enables automatic test equipment to distinguish between the correct circuit behavior and the faulty circuit behavior caused by defects. The generated patterns are used to test semiconductor devices after manufacture, and in some cases to assist with determining the cause of failure (failure analysis).[1] The effectiveness of ATPG is measured by the number of modeled defects, or fault models, that are detected and by the number of generated patterns. These metrics generally indicate test quality (higher with more fault detections) and test application time (higher with more patterns). ATPG efficiency is another important consideration. It is influenced by the fault model under consideration, the type of circuit under test (full scan, synchronous sequential, or asynchronous sequential), the level of abstraction used to represent the circuit under test (gate, register-transfer, switch), and the required test quality.
Basics of ATPG
A defect is an error introduced into a device during the manufacturing process. A fault model is a mathematical description of how a defect alters design behavior. The logic values observed at the device's primary outputs, while applying a test pattern to some device under test (DUT), are called the output of that test pattern. The output of a test pattern, when testing a fault-free device that works exactly as designed, is called the expected output of that test pattern. A fault is said to be detected by a test pattern if the output of that test pattern, when testing a device that has only that one fault, is different from the expected output. The ATPG process for a targeted fault consists of two phases: fault activation and fault propagation. Fault activation establishes a signal value at the fault model site that is opposite of the value produced by the fault model. Fault propagation moves the resulting signal value, or fault effect, forward by sensitizing a path from the fault site to a primary output.
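To make the detection criterion concrete, here is a minimal Python sketch; the two-gate circuit (c = AND(a, b) feeding out = OR(c, d)) and the stuck-at-0 fault on c are made-up assumptions for illustration, not an example from the literature.

from itertools import product

def good_circuit(a, b, d):
    c = a & b          # internal line c
    return c | d

def faulty_circuit(a, b, d):
    c = a & b
    c = 0              # injected fault: line c stuck-at-0
    return c | d

# A pattern detects the fault if the two outputs differ.
for a, b, d in product([0, 1], repeat=3):
    if good_circuit(a, b, d) != faulty_circuit(a, b, d):
        print("detects c stuck-at-0:", (a, b, d))

The only detecting pattern is (1, 1, 0): a = b = 1 activates the fault by driving c to the value opposite the stuck value, and d = 0 propagates the fault effect through the OR gate to the primary output.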
ATPG can fail to find a test for a particular fault in at least two cases. First, the fault may be intrinsically undetectable, such that no patterns exist that can detect that particular fault. The classic example of this is a redundant circuit, designed so that no single fault causes the output to change. In such a circuit, any single fault will be inherently undetectable.
Second, it is possible that a detecting pattern exists, but the algorithm cannot find it. Since the ATPG problem is NP-complete (by reduction from the Boolean satisfiability problem), there will be cases where patterns exist but ATPG gives up, since finding them would take an impractically long time (assuming P ≠ NP, of course).
Fault models
Main article: Fault model
•             Single fault assumption: only one fault occurs in a circuit. If we define k possible fault types in our fault model and the circuit has n signal lines, then under the single fault assumption the total number of single faults is k×n.
•             Multiple fault assumption: multiple faults may occur in a circuit.
Fault collapsing
Two or more faults may produce the same faulty behavior for all input patterns; such faults are called equivalent faults. Any single fault from a set of equivalent faults can represent the whole set, so far fewer than k×n tests are required for a circuit with n signal lines. Removing equivalent faults from the entire set of faults is called fault collapsing.
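The sketch below illustrates fault collapsing on the smallest useful example, a single AND gate. The gate and the brute-force equivalence check are assumptions for illustration; production tools collapse faults structurally rather than by exhaustive simulation.

from itertools import product

def simulate(a, b, fault=None):
    # fault is a (line, stuck_value) pair, or None for the good machine
    nets = {"a": a, "b": b}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]       # stuck-at fault on an input
    out = nets["a"] & nets["b"]
    if fault and fault[0] == "out":
        out = fault[1]                  # stuck-at fault on the output
    return out

# Group faults by their output column over all input patterns.
classes = {}
for line in ("a", "b", "out"):
    for v in (0, 1):
        sig = tuple(simulate(a, b, (line, v)) for a, b in product([0, 1], repeat=2))
        classes.setdefault(sig, []).append(f"{line}/{v}")

for members in classes.values():
    print(members)

The six single stuck-at faults of the AND gate collapse into four equivalence classes, because a/0, b/0 and out/0 all produce the same all-zero output column; any one of them can represent the class.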
The Stuck-at fault model
Main article: Stuck-at fault
In the past several decades, the most popular fault model used in practice has been the single stuck-at fault model. In this model, one of the signal lines in a circuit is assumed to be stuck at a fixed logic value, regardless of what inputs are supplied to the circuit. Hence, if a circuit has n signal lines, there are potentially 2n stuck-at faults (two per line) defined on the circuit, of which some can be viewed as being equivalent to others. The stuck-at fault model is a logical fault model because no delay information is associated with the fault definition. It is also called a permanent fault model because the faulty effect is assumed to be permanent, in contrast to intermittent faults, which occur (seemingly) at random, and transient faults, which occur sporadically, perhaps depending on operating conditions (e.g. temperature, power supply voltage) or on the data values (high or low voltage states) on surrounding signal lines. The single stuck-at fault model is structural because it is defined based on a structural gate-level circuit model.
A pattern set with 100% stuck-at fault coverage consists of tests to detect every possible stuck-at fault in a circuit. 100% stuck-at fault coverage does not necessarily guarantee high quality, since faults of many other kinds—such as bridging faults, opens faults, and transition (aka delay) faults—often occur.
Transistor faults
This model is used to describe faults in CMOS logic gates. At the transistor level, a transistor may be stuck-short or stuck-open. A stuck-short (or stuck-on) transistor behaves as if it always conducts, while a stuck-open (or stuck-off) transistor never conducts current. A stuck-short transistor can produce a short between VDD and VSS.
Bridging faults
Main article: Bridging fault
A short circuit between two signal lines is called a bridging fault. Bridging to VDD or VSS is equivalent to the stuck-at fault model. Traditionally, the two bridged signals were modeled as the logic AND or OR of the two signals. If one driver dominates the other in a bridging situation, the dominant driver forces its logic value onto the other; in such cases a dominant bridging fault model is used. To better reflect the reality of CMOS VLSI devices, a dominant-AND or dominant-OR bridging fault model is used; in this case the dominant driver keeps its value, while the other driver takes the AND or OR of its own value and the dominant driver's.
Opens faults
Delay faults
Delay faults can be classified as:
•             Gate delay fault
•             Transition fault
•             Path delay fault: this fault is due to the sum of all gate propagation delays along a single path, and models the case where the delay of one or more paths exceeds the clock period. One major problem in finding delay faults is the number of possible paths in a circuit under test (CUT), which in the worst case grows exponentially with the number of lines n in the circuit, as the sketch after this list illustrates.
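Here is a minimal Python sketch of that growth, counting input-to-output paths in a DAG with a memoized depth-first sweep. The chain of reconvergent "diamond" stages is a made-up worst-case structure in which every stage doubles the path count.

from functools import lru_cache

def count_paths(edges, source, sink):
    # edges: dict mapping node -> list of fanout nodes (must be acyclic)
    @lru_cache(maxsize=None)
    def paths_from(node):
        if node == sink:
            return 1
        return sum(paths_from(nxt) for nxt in edges.get(node, []))
    return paths_from(source)

def diamond_chain(n):
    # stage i fans out to two branches that reconverge at stage i + 1
    edges = {}
    for i in range(n):
        edges[i] = [(i, "top"), (i, "bottom")]
        edges[(i, "top")] = [i + 1]
        edges[(i, "bottom")] = [i + 1]
    return edges

for n in (5, 10, 20, 40):
    print(n, "stages:", count_paths(diamond_chain(n), 0, n), "paths")

Even this toy structure reaches about 10^12 paths at 40 stages, which is why practical delay-fault ATPG targets a selected subset of paths (for example, the longest ones) rather than enumerating all of them.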
Combinational ATPG
The combinational ATPG method allows testing the individual nodes (or flip-flops) of the logic circuit without being concerned with the operation of the overall circuit. During test, a so-called scan mode is enabled, forcing all flip-flops (FFs) to be connected in a simplified fashion, effectively bypassing their interconnections as intended during normal operation. This allows a relatively simple vector matrix to quickly test all the constituent FFs, as well as to trace failures to specific FFs.
Sequential ATPG
Sequential-circuit ATPG searches for a sequence of vectors to detect a particular fault through the space of all possible vector sequences. Various search strategies and heuristics have been devised to find a shorter sequence and/or to find a sequence faster. However, according to reported results, no single strategy/heuristic out-performs others for all applications/circuits. This observation implies that a test generator should include a comprehensive set of heuristics.
Even a simple stuck-at fault requires a sequence of vectors for detection in a sequential circuit. Also, due to the presence of memory elements, the controllability and observability of the internal signals in a sequential circuit are in general much more difficult than those in a combinational logic circuit. These factors make the complexity of sequential ATPG much higher than that of combinational ATPG, where a scan-chain (i.e. switchable, for-test-only signal chain) is added to allow simple access to the individual nodes.
Due to the high complexity of the sequential ATPG, it remains a challenging task for large, highly sequential circuits that do not incorporate any Design For Testability (DFT) scheme. However, these test generators, combined with low-overhead DFT techniques such as partial scan, have shown a certain degree of success in testing large designs. For designs that are sensitive to area and/or performance overhead, the solution of using sequential-circuit ATPG and partial scan offers an attractive alternative to the popular full-scan solution, which is based on combinational-circuit ATPG.
ATPG and nanometer technologies
Historically, ATPG has focused on a set of faults derived from a gate-level fault model. As design trends move toward nanometer technology, new manufacture testing problems are emerging. During design validation, engineers can no longer ignore the effects of crosstalk and power supply noise on reliability and performance. Current fault modeling and vector-generation techniques are giving way to new models and techniques that consider timing information during test generation, that are scalable to larger designs, and that can capture extreme design conditions. For nanometer technology, many current design validation problems are becoming manufacturing test problems as well, so new fault-modeling and ATPG techniques will be needed.
Algorithmic methods
Testing very-large-scale integrated circuits with high fault coverage is a difficult task because of their complexity. Therefore, many different ATPG methods have been developed to address combinational and sequential circuits.
•             Early test generation algorithms, such as Boolean difference and literal proposition, were not practical to implement on a computer.
•             The D Algorithm was the first practical test generation algorithm in terms of memory requirements. The D Algorithm (proposed by Roth, 1966) introduced the D notation, which continues to be used in most ATPG algorithms. The D Algorithm tries to propagate the stuck-at fault value, denoted by D (for SA0) or D̄ (for SA1), to a primary output.
•             Path-Oriented Decision Making (PODEM) is an improvement over the D Algorithm. PODEM was created in 1981 by Prabhu Goel, when shortcomings in the D Algorithm became evident as design innovations resulted in circuits that it could not handle.
•             Fan-Out Oriented (FAN Algorithm) is an improvement over PODEM. It limits the ATPG search space to reduce computation time and accelerates backtracing.
•             Methods based on Boolean satisfiability are sometimes used to generate test vectors.
•             Pseudorandom test generation is the simplest method of creating tests. It uses a pseudorandom number generator to generate test vectors, relies on logic simulation to compute good-machine results, and uses fault simulation to calculate the fault coverage of the generated vectors (see the sketch after this list).
•             Wavelet Automatic Spectral Pattern Generator (WASP) is an improvement over spectral algorithms for sequential ATPG. It uses wavelet heuristics to search the space, reducing computation time and accelerating the compactor. It was put forward by Suresh kumar Devanathan of Rake Software and Michael Bushnell of Rutgers University; Devanathan invented WASP as part of his thesis at Rutgers.
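As a concrete illustration of the pseudorandom approach referenced above, the following sketch generates random patterns and fault-simulates them against a made-up two-level circuit (out = (a AND b) OR (c AND d)); the circuit, fault list and seed are assumptions for illustration only.

import random

# Gates in topological order: (output net, operator, input nets)
GATES = [("e", "and", ("a", "b")),
         ("f", "and", ("c", "d")),
         ("out", "or", ("e", "f"))]

def simulate(pattern, fault=None):
    nets = dict(zip("abcd", pattern))
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]            # fault on a primary input
    for name, op, (x, y) in GATES:
        nets[name] = nets[x] & nets[y] if op == "and" else nets[x] | nets[y]
        if fault and fault[0] == name:
            nets[name] = fault[1]            # fault on a gate output
    return nets["out"]

faults = {(line, v) for line in "abcdef" for v in (0, 1)}
faults |= {("out", 0), ("out", 1)}

rng = random.Random(1)                        # fixed seed: repeatable run
patterns_used = 0
while faults and patterns_used < 200:
    p = tuple(rng.randint(0, 1) for _ in range(4))
    patterns_used += 1
    faults = {f for f in faults if simulate(p, f) == simulate(p)}
print(patterns_used, "patterns,", len(faults), "faults left undetected")

With this circuit every single stuck-at fault is detectable, so the loop typically empties the fault list after a handful of random patterns. Real designs contain random-pattern-resistant faults, which is why pseudorandom generation is usually followed by a deterministic algorithm for the leftovers.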
ATPG is a topic that is covered by several conferences throughout the year. The primary US conferences are the International Test Conference and the VLSI Test Symposium, while in Europe the topic is covered by DATE and ETS.



Wednesday, 29 April 2015

Semiconductor device modeling

Hierarchy of technology CAD tools building from the process level to circuits. Left side icons show typical manufacturing issues; right side icons reflect MOS scaling results based on TCAD. Credit: Prof. Robert Dutton in CRC Electronic Design Automation for IC Handbook, Vol II, Chapter 25, by permission.
Semiconductor device modeling creates models for the behavior of electrical devices based on fundamental physics, such as the doping profiles of the devices. It may also include the creation of compact models (such as the well-known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive it from the underlying physics. Normally it starts from the output of a semiconductor process simulation.
Introduction

The figure to the right provides a simplified conceptual view of “the big picture.” This figure shows two inverter stages and the resulting input-output voltage-time plot of the circuit. From the digital systems point of view the key parameters of interest are: timing delays, switching power, leakage current and cross-coupling (crosstalk) with other blocks. The voltage levels and transition speed are also of concern.
The figure also shows schematically the importance of Ion versus Ioff, which in turn is related to drive-current (and mobility) for the “on” device and several leakage paths for the “off” devices. Not shown explicitly in the figure are the capacitances—both intrinsic and parasitic—that affect dynamic performance.
Power scaling, now a major driving force in the industry, is reflected in the simplified equation shown in the figure; the critical parameters are capacitance, power supply voltage and clocking frequency. Key parameters that relate device behavior to system performance include the threshold voltage, driving current and subthreshold characteristics.
It is the confluence of system performance issues with the underlying technology and device design variables that results in the ongoing scaling laws that we now codify as Moore’s law.
Device modeling
Main articles: Diode modelling and Transistor models
The physics and modeling of devices in integrated circuits is dominated by MOS and bipolar transistor modeling. However, other devices are important, such as memory devices, that have rather different modeling requirements. There are of course also issues of reliability engineering—for example, electro-static discharge (ESD) protection circuits and devices—where substrate and parasitic devices are of pivotal importance. These effects and modeling are not considered by most device modeling programs; the interested reader is referred to several excellent monographs in the area of ESD and I/O modeling.[1][2][3]
Physics driven vs. compact models
Physics-driven device modeling is intended to be accurate, but it is not fast enough for higher-level tools, including circuit simulators such as SPICE. Therefore circuit simulators normally use more empirical models (often called compact models) that do not directly model the underlying physics. For example, the modeling of inversion-layer mobility, and of its dependence on physical parameters, ambient and operating conditions, is an important topic both for TCAD (technology computer-aided design) physical models and for circuit-level compact models. However, mobility is not accurately modeled from first principles, so models are instead fitted to experimental data. For mobility modeling at the physical level, the relevant variables are the various scattering mechanisms, carrier densities, and local potentials and fields, including their technology and ambient dependencies. By contrast, at the circuit level, models parameterize the effects in terms of terminal voltages and empirical scattering parameters. The two representations can be compared, but it is unclear in many cases how the experimental data should be interpreted in terms of more microscopic behavior.
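As a flavor of what a compact model looks like, here is a sketch in the spirit of the simplest SPICE MOSFET level, the long-channel square-law equations. All parameter values are made-up illustrations; production compact models such as BSIM carry hundreds of fitted parameters.

def mosfet_id(vgs, vds, vth=0.7, k=200e-6, w_over_l=10.0, lam=0.02):
    """Drain current (A) of an NMOS device, square-law model."""
    beta = k * w_over_l
    vov = vgs - vth                     # overdrive voltage
    if vov <= 0:
        return 0.0                      # cutoff (subthreshold ignored)
    if vds < vov:
        # triode region
        return beta * (vov * vds - vds**2 / 2)
    # saturation, with channel-length modulation
    return 0.5 * beta * vov**2 * (1 + lam * vds)

print(f"{mosfet_id(vgs=1.8, vds=1.8):.3e} A")  # saturation
print(f"{mosfet_id(vgs=1.8, vds=0.1):.3e} A")  # triode

The point of the example is the trade made by compact models: a handful of fitted parameters (vth, k, lam) evaluated in microseconds, instead of solving the semiconductor transport equations that a TCAD device simulator would.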
History
The evolution of technology computer-aided design (TCAD), the synergistic combination of process, device and circuit simulation and modeling tools, finds its roots in bipolar technology, starting in the late 1960s, and in the challenges of junction-isolated, double- and triple-diffused transistors. These devices and technologies were the basis of the first integrated circuits; nonetheless, many of the scaling issues and underlying physical effects are integral to IC design, even after four decades of IC development. With these early generations of ICs, process variability and parametric yield were an issue, a theme that would reemerge as a controlling factor in future IC technology as well.
Process control issues—both for the intrinsic devices and all the associated parasitics—presented formidable challenges and mandated the development of a range of advanced physical models for process and device simulation. Starting in the late 1960s and into the 1970s, the modeling approaches exploited were dominantly one- and two-dimensional simulators. While TCAD in these early generations showed exciting promise in addressing the physics-oriented challenges of bipolar technology, the superior scalability and power consumption of MOS technology revolutionized the IC industry. By the mid-1980s, CMOS became the dominant driver for integrated electronics. Nonetheless, these early TCAD developments [4][5] set the stage for their growth and broad deployment as an essential toolset that has leveraged technology development through the VLSI and ULSI eras which are now the mainstream.
IC development for more than a quarter-century has been dominated by MOS technology. In the 1970s and 1980s, NMOS was favored owing to speed and area advantages, coupled with technology limitations and concerns related to isolation, parasitic effects and process complexity. During that era of NMOS-dominated LSI and the emergence of VLSI, the fundamental scaling laws of MOS technology were codified and broadly applied.[6] It was also during this period that TCAD reached maturity in terms of realizing robust process modeling (primarily one-dimensional), which then became an integral technology design tool, used universally across the industry.[7] At the same time, device simulation, dominantly two-dimensional owing to the nature of MOS devices, became the workhorse of technologists in the design and scaling of devices.[8][9] The transition from NMOS to CMOS technology resulted in the necessity of tightly coupled and fully 2D simulators for process and device simulation. This third generation of TCAD tools became critical to address the full complexity of twin-well CMOS technology, including issues of design rules and parasitic effects such as latchup.[10][11] An abbreviated perspective of this period, through the mid-1980s, is given in [12]; for the point of view of how TCAD tools were used in the design process, see [13].


Monday, 27 April 2015

VARIABLE RESISTORS



Variable resistors

Adjustable resistors

A resistor may have one or more fixed tapping points so that the resistance can be changed by moving the connecting wires to different terminals. Some wirewound power resistors have a tapping point that can slide along the resistance element, allowing a larger or smaller part of the resistance to be used.
Where continuous adjustment of the resistance value during operation of equipment is required, the sliding resistance tap can be connected to a knob accessible to an operator. Such a device is called a rheostat and has two terminals.
Potentiometers

Main article: Potentiometer

A common element in electronic devices is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob. These variable resistors are known as potentiometers when all three terminals are present, since they act as a continuously adjustable voltage divider. A common example is a volume control for a radio receiver.[16]
Accurate, high-resolution panel-mounted potentiometers (or "pots") have resistance elements typically wirewound on a helical mandrel, although some include a conductive-plastic resistance coating over the wire to improve resolution. These typically offer ten turns of their shafts to cover their full range. They are usually set with dials that include a simple turns counter and a graduated dial. Electronic analog computers used them in quantity for setting coefficients, and delayed-sweep oscilloscopes of recent decades included one on their panels.
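Since a potentiometer is just a continuously adjustable voltage divider, its unloaded behavior fits in a few lines of Python; the 12 V source and 10 kΩ track value below are arbitrary assumptions.

def pot_output(v_in, r_total, x):
    """Unloaded wiper voltage for wiper position x in [0, 1]."""
    r_lower = r_total * x              # track resistance from wiper to ground
    return v_in * r_lower / r_total    # classic divider: Vout = Vin * Rlower / Rtotal

for x in (0.0, 0.25, 0.5, 1.0):
    print(f"wiper at {x:.2f}: {pot_output(12.0, 10e3, x):.2f} V")

An unloaded linear pot is exactly linear in wiper position; connecting a load to the wiper places it in parallel with the lower track section and bends the curve, which is one reason precision applications buffer the wiper.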

Resistance decade boxes

A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in. Usually the resistance is accurate to high precision, ranging from laboratory/calibration grade accuracy of 20 parts per million, to field grade at 1%. Inexpensive boxes with lesser accuracy are also available. All types offer a convenient way of selecting and quickly changing a resistance in laboratory, experimental and development work without needing to attach resistors one by one, or even stock each value. The range of resistance provided, the maximum resolution, and the accuracy characterize the box. For example, one box offers resistances from 0 to 100 megohms, maximum resolution 0.1 ohm, accuracy 0.1%.[17]

Special devices

There are various devices whose resistance changes with various quantities. The resistance of an NTC thermistor exhibits a strong negative temperature coefficient, making it useful for measuring temperatures. Since a thermistor's resistance can be large until it heats up due to the passage of current, thermistors are also commonly used to prevent excessive current surges when equipment is powered on. Similarly, the resistance of a humistor varies with humidity. One sort of photodetector, the photoresistor, has a resistance which varies with illumination.
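A common way to describe an NTC thermistor quantitatively is the B-parameter equation, R(T) = R0 * exp(B * (1/T - 1/T0)); the sketch below uses assumed but typical catalog values (R0 = 10 kΩ at 25 °C, B = 3950 K).

import math

def ntc_resistance(t_celsius, r0=10e3, t0=298.15, b=3950.0):
    t = t_celsius + 273.15
    return r0 * math.exp(b * (1.0 / t - 1.0 / t0))

for t in (0, 25, 50, 100):
    print(f"{t:4d} °C -> {ntc_resistance(t):9.0f} ohm")
# Resistance falls steeply with temperature: the strong negative
# temperature coefficient described above.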
The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain. A single resistor may be used, or a pair (half bridge), or four resistors connected in a Wheatstone bridge configuration. The strain resistor is bonded with adhesive to an object that will be subjected to mechanical strain. With the strain gauge and a filter, amplifier, and analog/digital converter, the strain on an object can be measured.
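A quarter-bridge arrangement, with the strain gauge as one arm of the Wheatstone bridge and assumed values throughout (350 Ω arms, gauge factor 2, 5 V excitation), shows why amplification is needed:

def quarter_bridge_vout(v_exc, gf, eps, r=350.0):
    # one arm is the gauge R(1 + GF*eps); the other three arms are fixed at R
    r_gauge = r * (1 + gf * eps)
    v_signal = v_exc * r_gauge / (r + r_gauge)   # midpoint of the gauge half
    v_ref = v_exc * r / (r + r)                  # midpoint of the fixed half
    return v_signal - v_ref

# 1000 microstrain with a foil gauge (GF ~ 2) and 5 V excitation:
print(f"{quarter_bridge_vout(5.0, 2.0, 1e-3) * 1e3:.3f} mV")

About 2.5 mV for 1000 microstrain: the bridge converts a 0.2% resistance change into a small differential voltage while rejecting the large common-mode half-supply level, and an amplifier then brings the signal up to the ADC's range.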
A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. It passes a current whose magnitude can vary by a factor of 10^12 in response to changes in applied pressure.

Measurement

The value of a resistor can be measured with an ohmmeter, which may be one function of a multimeter. Usually, probes on the ends of test leads connect to the resistor. A simple ohmmeter may apply a voltage from a battery across the unknown resistor (with an internal resistor of a known value in series) producing a current which drives a meter movement. The current, in accordance with Ohm's law, is inversely proportional to the sum of the internal resistance and the resistor being tested, resulting in an analog meter scale which is very non-linear, calibrated from infinity to 0 ohms. A digital multimeter, using active electronics, may instead pass a specified current through the test resistance. The voltage generated across the test resistance in that case is linearly proportional to its resistance, which is measured and displayed. In either case the low-resistance ranges of the meter pass much more current through the test leads than do high-resistance ranges, in order for the voltages present to be at reasonable levels (generally below 10 volts) but still measurable.
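The non-linearity of the analog ohmmeter scale follows directly from Ohm's law, as this sketch shows; the 3 V battery and 3 kΩ internal resistance are assumed values.

def deflection(r_test, v_batt=3.0, r_internal=3e3):
    i = v_batt / (r_internal + r_test)   # Ohm's law for the series loop
    i_full_scale = v_batt / r_internal   # full scale at R_test = 0
    return i / i_full_scale              # fraction of full deflection

for r in (0, 1e3, 3e3, 10e3, 100e3):
    print(f"R = {r:8.0f} ohm -> {deflection(r):5.1%} of full scale")

Zero ohms reads full scale, a resistance equal to the internal resistance reads exactly half scale, and everything above a few tens of kilohms crowds into the last sliver of the dial near the infinity mark.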
Measuring low-value resistors, such as fractional-ohm resistors, with acceptable accuracy requires four-terminal connections. One pair of terminals applies a known, calibrated current to the resistor, while the other pair senses the voltage drop across the resistor. Some laboratory quality ohmmeters, especially milliohmmeters, and even some of the better digital multimeters sense using four input terminals for this purpose, which may be used with special test leads. Each of the two so-called Kelvin clips has a pair of jaws insulated from each other. One side of each clip applies the measuring current, while the other connections are only to sense the voltage drop. The resistance is again calculated using Ohm's Law as the measured voltage divided by the applied current.


Sunday, 26 April 2015

VARISTOR

Varistor

A varistor is an electronic component with an electrical resistance that varies with the applied voltage.[1] Also known as a voltage-dependent resistor (VDR), it has a nonlinear, non-ohmic current–voltage characteristic that is similar to that of a diode. In contrast to a diode, however, it has the same characteristic for both directions of traversing current. At low voltage it has a high electrical resistance, which decreases as the voltage is raised.

Varistors are used as control or compensation elements in circuits either to provide optimal operating conditions or to protect against excessive transient voltages. When used as protection devices, they shunt the current created by the excessive voltage away from sensitive components when triggered.

The development of the varistor, in the form of a new type of rectifier (copper oxide), originated in the work of L.O. Grondahl and P.H. Geiger in 1927.[2] The name varistor is a portmanteau of varying resistor. The term is only used for non-ohmic varying resistors. Variable resistors, such as the potentiometer and the rheostat, have ohmic characteristics.

Background
  
The most common type of varistor is the metal-oxide varistor (MOV). This contains a ceramic mass of zinc oxide grains, in a matrix of other metal oxides (such as small amounts of bismuth, cobalt and manganese), sandwiched between two metal plates (the electrodes). The boundary between each grain and its neighbour forms a diode junction, which allows current to flow in only one direction. The mass of randomly oriented grains is electrically equivalent to a network of back-to-back diode pairs, each pair in parallel with many other pairs.[3] When a small or moderate voltage is applied across the electrodes, only a tiny current flows, caused by reverse leakage through the diode junctions. When a large voltage is applied, the diode junctions break down due to a combination of thermionic emission and electron tunneling, and a large current flows. The result of this behaviour is a highly nonlinear current-voltage characteristic, in which the MOV has a high resistance at low voltages and a low resistance at high voltages.
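The composite effect of all those grain-boundary junctions is usually summarized by the empirical power law I = k * V**alpha. The sketch below assumes alpha = 30 (typical for zinc-oxide MOVs) and picks k so that 1 mA flows at a nominal 200 V; both numbers are illustrative, not from a datasheet.

V_NOM, I_NOM, ALPHA = 200.0, 1e-3, 30.0
K = I_NOM / V_NOM**ALPHA

def varistor_current(v):
    return K * v**ALPHA

for v in (100, 150, 200, 250, 300):
    print(f"{v:3d} V -> {varistor_current(v):.3e} A")

Halving the voltage cuts the current by a factor of 2^30 (about a billion), while a 50% overvoltage raises it by orders of magnitude: exactly the high-resistance-below, low-resistance-above behavior described in the paragraph.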

A varistor remains non-conductive as a shunt-mode device during normal operation, when the voltage across it remains well below its "clamping voltage"; varistors are thus typically used for suppressing line voltage surges. However, a varistor may not be able to successfully limit a very large surge from an event like a lightning strike, where the energy involved is many orders of magnitude greater than the varistor can handle. Follow-through current resulting from a strike may generate excessive current that completely destroys the varistor. Lesser surges still degrade it, however. Degradation is defined by the manufacturer's life-expectancy charts, which relate current, time and number of transient pulses. The main parameter affecting varistor life expectancy is its energy (joule) rating. As the energy rating increases, life expectancy typically increases exponentially, the number of transient pulses the varistor can accommodate increases, and the "clamping voltage" it provides during each transient decreases. The probability of catastrophic failure can be reduced by increasing the rating, either by using a single varistor of higher rating or by connecting more devices in parallel. A varistor is typically deemed to be fully degraded when its "clamping voltage" has changed by 10%. In this condition it is not visibly damaged and remains functional (no catastrophic failure).
In general, the primary cause of varistor breakdown is localized heating caused by thermal runaway. This is due to a lack of conformity in individual grain-boundary junctions, which leads to the failure of dominant current paths under thermal stress. If the energy in a transient pulse (normally measured in joules) is too high, the device may melt, burn, vaporize, or otherwise be damaged or destroyed. This catastrophic failure occurs when the "absolute maximum ratings" in the manufacturer's datasheet are significantly exceeded.
Important parameters are the varistor's energy rating in joules, operating voltage, response time, maximum current, and breakdown (clamping) voltage. The energy rating is often defined using standardized transients such as 8/20 microseconds or 10/1000 microseconds, where 8 microseconds is the transient's front time and 20 microseconds is the time to half value. To protect communications lines (such as telephone lines), transient suppression devices such as 3 mil carbon blocks (IEEE C62.32), ultra-low-capacitance varistors or avalanche diodes are used. For higher frequencies, such as radio communication equipment, a gas discharge tube (GDT) may be utilized. A typical surge protector power strip is built using MOVs. The cheapest kind may use just one varistor, from hot (live, active) to neutral. A better protector contains at least three varistors, one across each of the three pairs of conductors (hot-neutral, hot-ground, neutral-ground). A power strip protector in the United States should have a UL1449 3rd edition approval so that catastrophic MOV failure does not create a fire hazard.
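A rough feel for an energy rating can be had by integrating clamping voltage times surge current over a standard pulse. In this sketch the 8/20 µs current wave is approximated by a double exponential with made-up fit constants scaled to a 1 kA peak, and the clamping voltage is assumed constant at 500 V; real ratings come from the manufacturer's datasheet.

import math

TAU1, TAU2, I_PEAK = 4e-6, 12e-6, 1000.0   # assumed fit constants, 1 kA peak

def _shape(t):
    # double-exponential surrogate for an 8/20 us impulse current
    return math.exp(-t / TAU2) - math.exp(-t / TAU1)

_NORM = max(_shape(k * 1e-7) for k in range(1, 1000))  # normalize peak to 1

def surge_current(t):
    return I_PEAK * _shape(t) / _NORM

def absorbed_energy(v_clamp=500.0, t_end=100e-6, dt=1e-8):
    # E = sum of v * i * dt over the pulse, assuming constant clamp voltage
    return sum(v_clamp * surge_current(k * dt) * dt
               for k in range(int(t_end / dt)))

print(f"about {absorbed_energy():.1f} J absorbed")   # roughly 10 J here

A varistor with, say, a 20 J rating would survive this pulse with margin, but the life-expectancy charts mentioned above still count it against the device's cumulative pulse budget.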

Specifications

The response time of the MOV is not standardized. The sub-nanosecond MOV response claim is based on the material's intrinsic response time, which in practice is slowed by other factors such as the inductance of the component leads and the mounting method. That response time is also insignificant compared to a transient with an 8 µs rise time, allowing ample time for the device to turn on. When subjected to a very fast transient with a rise time under 1 ns, MOV response times are in the 40–60 ns range.[4]

Typical capacitance for consumer-sized (7–20 mm diameter) varistors is in the range of 100–1,000 pF. Smaller, lower-capacitance varistors are available with capacitance of about 1 pF for microelectronic protection, such as in cellular phones. These low-capacitance varistors are, however, unable to withstand large surge currents, simply because of their compact PCB-mount size. MOVs are specified according to the voltage range that they can tolerate without damage.

Hazards

While an MOV is designed to conduct significant power for very short durations (about 8 to 20 microseconds), such as during a lightning strike, it typically does not have the capacity to conduct sustained energy. Under normal utility voltage conditions this is not a problem, but certain types of faults on the utility power grid can result in sustained over-voltage conditions. Examples include the loss of a neutral conductor or shorted lines on the high-voltage system. Application of sustained over-voltage to a MOV can cause high dissipation, potentially resulting in the MOV device catching fire. The National Fire Protection Association (NFPA) has documented many cases of catastrophic fires caused by MOV devices in surge suppressors, and has issued bulletins on the issue.[citation needed]
A series connected thermal fuse is one solution to catastrophic MOV failure. Varistors with internal thermal protection are also available.

There are several issues to note regarding the behavior of transient voltage surge suppressors (TVSS) incorporating MOVs under over-voltage conditions. Depending on the level of conducted current, the dissipated heat may be insufficient to cause failure, but it may degrade the MOV device and reduce its life expectancy. If excessive current is conducted by a MOV, it may fail catastrophically, keeping the load connected but now without any surge protection; the user may have no indication that the surge suppressor has failed. Under the right conditions of over-voltage and line impedance, it is possible for the MOV to burst into flames,[5] the root cause of many fires[6] and the main reason for the NFPA's concern, which resulted in UL1449 in 1986 and subsequent revisions in 1998 and 2009. Properly designed TVSS devices must not fail catastrophically; instead, a thermal fuse or something equivalent should open and disconnect only the MOV devices.
Varistor limitations
A MOV inside a TVSS device does not provide equipment with complete power protection. In particular, a MOV device provides no protection for the connected equipment from sustained over-voltages that may result in damage to that equipment as well as to the protector device. Other sustained and harmful overvoltages may be lower and therefore ignored by a MOV device.
A varistor provides no equipment protection from inrush current surges (during equipment startup), from overcurrent (created by a short circuit), or from voltage sags (also known as a brownout); it neither senses nor affects such events. Susceptibility of electronic equipment to these other power disturbances is defined by other aspects of the system design, either inside the equipment itself or externally by means such as a UPS, a voltage regulator or a surge protector with built-in overvoltage protection (which typically consists of a voltage-sensing circuit and a relay for disconnecting the AC input when the voltage reaches a danger threshold).
Varistors compared to other transient suppressors
  
Another method for suppressing voltage spikes is the transient-voltage-suppression diode (TVS). Although diodes do not have as much capacity to conduct large surges as MOVs, diodes are not degraded by smaller surges and can be implemented with a lower "clamping voltage". MOVs degrade from repeated exposure to surges[7] and generally have a higher "clamping voltage" so that leakage does not degrade the MOV. Both types are available over a wide range of voltages. MOVs tend to be more suitable for higher voltages, because they can conduct the higher associated energies at less cost.[8]

Another type of transient suppressor is the gas-tube suppressor. This is a type of spark gap that may use air or an inert gas mixture, and often a small amount of radioactive material such as Ni-63, to provide a more consistent breakdown voltage and reduce response time. Unfortunately, these devices may have higher breakdown voltages and longer response times than varistors. However, they can handle significantly higher fault currents and withstand multiple high-voltage hits (for example, from lightning) without significant degradation.
Multi-layer varistor

Multi-layer varistor (MLV) devices provide electrostatic discharge protection to electronic circuits from low- to medium-energy transients in sensitive equipment operating at 0–120 volts DC. They have peak current ratings from about 20 to 500 amperes, and peak energy ratings from 0.05 to 2.5 joules.[citation needed]


S-VIDEO


S-Video

A standard 4-pin S-Video cable connector, with each signal pin paired with its own ground pin.
Type: Analog video connector

General specifications
Hot pluggable: Yes
External: Yes
Video signal: NTSC, PAL, or SECAM video
Pins: 4, 7, or 9
Connector: Mini-DIN connector

Pin out (looking at the female connector)
Pin 1: GND, ground (Y)
Pin 2: GND, ground (C)
Pin 3: Y, intensity (luminance)
Pin 4: C, color (chrominance)
The shells should be connected together by an overall screen/shield. However, the shield is often absent in low-end cables, which can result in picture degradation.
Separate Video,[1] commonly termed S-Video, Super-Video and Y/C, is a signaling standard for standard definition video, typically 480i or 576i. By separating the black-and-white and coloring signals, it achieves better image quality than composite video, but has lower color resolution than component video.
Signal
The S-video cable carries video using two synchronized signal and ground pairs, termed Y and C.
Y is the luma signal, which carries the luminance - or black-and-white - of the picture, including synchronization pulses.
C is the chroma signal, which carries the chrominance - or coloring-in - of the picture. This signal contains both the saturation and the hue of the video.
The luminance signal carries horizontal and vertical sync pulses in the same way as a composite video signal. Luma is a signal carrying luminance after gamma correction, and is therefore termed "Y" because of the similarity to the lower-case Greek letter gamma.
In composite video, the signals co-exist on different frequencies. To achieve this, the luminance signal must be low-pass filtered, dulling the image. As S-Video maintains the two as separate signals, such detrimental low-pass filtering for luminance is unnecessary, although the chrominance signal still has limited bandwidth relative to component video.
Compared with component video, which carries the identical luminance signal but separates the color-difference signals into Cb/Pb and Cr/Pr, the color resolution of S-Video is limited by the modulation on a subcarrier frequency of 3.57 to 4.43 MHz, depending on the standard. It is worth noting that this difference is meaningless on consumer videotape systems, as the chrominance is already severely constrained by both VHS and Betamax.
Carrying the color information as one signal means that the color has to be encoded in some way, typically in accord with NTSC, PAL, or SECAM, depending on the applicable local standard.
Also, S-Video suffers from low color resolution. NTSC S-Video color resolution is typically 120 lines horizontal (approximately 160 pixels edge-to-edge),[citation needed] versus 250 lines horizontal for the Rec. 601-encoded signal of a DVD, or 30 lines horizontal for standard VCRs.
Use
In many European Union countries, S-Video is less common because of the dominance of SCART, usually fitted to every TV. It is not usual to find S-Video outputs on video equipment, although a player may output S-Video over SCART; even then, the TV may not be compatible with S-Video wired this way, in which case it would show only a monochrome image.[2] In this case it is sometimes possible to modify the SCART adapter cable to make it work.
In PAL territories, games consoles usually did not output S-Video. Although the majority of TVs featured SCART sockets, no console ever came with an RGB SCART cable packed in (it had to be purchased separately); consoles generally came with RF adapters at first, and later the equally uncommon composite video using the classic RCA-type video jack. Sony's game systems were provided with a composite-to-SCART adapter which, just like VHS, only outputs composite video over SCART (RGB cables had to be purchased separately). In the US and some other NTSC countries, S-Video was provided but no RGB. In Japan, a special type of RGB cable, similar to SCART in looks but with a different pinout, was often available (Sony's game systems also had a special RGB cable available to connect the systems to selected Sony TVs). The Nintendo 64 was a special case: NTSC models could output S-Video, but only with modification would they output RGB. PAL Nintendo 64 models could output S-Video but not RGB, despite that being the easiest way to connect via SCART.
Physical connectors
The four-pin mini-DIN connector is the most common of several S-Video connector types. Other connector variants include seven-pin locking "dub" connectors used on many professional S-VHS machines, and dual "Y" and "C" BNC connectors, often used for S-Video patch panels. Early Y/C video monitors often used phono (RCA) connectors that were switchable between Y/C and composite video input. Though the connectors differ, the Y/C signals for all types are compatible.
JVC introduced the DIN-connector as both an S-VHS connector[3] and as Super Video.[4]
The mini-DIN pins, being weak, sometimes bend. This can result in the loss of colour or other corruption (or loss) in the signal. A bent pin can be forced back into shape, but this carries the risk of the pin breaking off.
Non-4-pin variants
These plugs are usually made to be plug-compatible with S-video, and include optional features, such as component video using an adapter. They are not necessarily S-video, although they can be operated in that mode.
7-pin mini-DIN
Non-standard 7-pin mini-DIN connectors (termed "7P") are used in some computer equipment (PCs and Macs). A 7-pin socket accepts, and is pin-compatible with, a standard 4-pin S-Video plug.[5] The three extra sockets may be used to supply composite (CVBS), an RGB or YPbPr video signal, or an I²C interface. The pinout usage varies among manufacturers.[5][6] In some implementations, the remaining pin must be grounded to enable the composite output or disable the S-Video output.
Some Dell laptops have a digital audio output in a 7-pin socket.[7]
9-pin Video In/Video Out

9-pin connectors are used in graphics systems that feature the ability to input video as well as output it.[8][9] Again, there is no standardization between manufacturers as to which pin does what, and there are two known variants of the connector in use. Although the S-Video signals are available on the corresponding pins, neither variant of the connector will accept an unmodified 4-pin S-Video plug, though one can be made to fit by removing the key from the plug. In that case, it becomes all too easy to misalign the plug when inserting it, with consequent damage to the small pins.
See also
•          Audio and video connector
•          RF connector
•          Composite monitor
•          List of video connectors
•          Video In Video Out (VIVO)


Thursday, 23 April 2015

SOLAR WATER HEATERS


Solar water heaters

Direct-gain solar heater panels with integrated storage tank

Flat-plate solar thermal collector, viewed from roof-level
Main article: Solar water heating
Increasingly, solar powered water heaters are being used. Their solar collectors are installed outside dwellings, typically on the roof or walls or nearby, and the potable hot water storage tank is typically a pre-existing or new conventional water heater, or a water heater specifically designed for solar thermal.
The most basic solar thermal models are the direct-gain type, in which the potable water is directly sent into the collector. Many such systems are said to use integrated collector storage (ICS), as direct-gain systems typically have storage integrated within the collector. Heating water directly is inherently more efficient than heating it indirectly via heat exchangers, but such systems offer very limited freeze protection (if any), can easily heat water to temperatures unsafe for domestic use, and ICS systems suffer from severe heat loss on cold nights and cold, cloudy days.
By contrast, indirect or closed-loop systems do not allow potable water through the panels, but rather pump a heat transfer fluid (either water or a water/antifreeze mix) through the panels. After collecting heat in the panels, the heat transfer fluid flows through a heat exchanger, transferring its heat to the potable hot water. When the panels are cooler than the storage tank or when the storage tank has already reached its maximum temperature, the controller in closed-loop systems will stop the circulation pumps. In a drainback system, the water drains into a storage tank contained in conditioned or semi-conditioned space, protected from freezing temperatures. With antifreeze systems, however, the pump must be run if the panel temperature gets too hot (to prevent degradation of the antifreeze) or too cold (to prevent the water/antifreeze mixture from freezing).
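The controller logic described above reduces to a simple differential rule with hysteresis. The sketch below is a minimal illustration; the on/off thresholds (8 °C / 2 °C) and the 60 °C tank limit are made-up but typical values, not from any particular product.

def pump_should_run(t_panel, t_tank, pump_on,
                    dt_on=8.0, dt_off=2.0, t_tank_max=60.0):
    if t_tank >= t_tank_max:
        return False                     # tank at maximum temperature
    delta = t_panel - t_tank
    if pump_on:
        return delta > dt_off            # keep running until the gain vanishes
    return delta > dt_on                 # only start for a worthwhile gain

print(pump_should_run(t_panel=55.0, t_tank=40.0, pump_on=False))  # True
print(pump_should_run(t_panel=45.0, t_tank=44.0, pump_on=True))   # False

The gap between the on and off thresholds is deliberate hysteresis: without it, the pump would short-cycle as soon as circulating fluid pulled the panel temperature down toward the tank temperature.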
Flat panel collectors are typically used in closed-loop systems. Flat panels, which often resemble skylights, are the most durable type of collector, and they also have the best performance for systems designed for temperatures within 56 °C (100 °F) of ambient temperature. Flat panels are regularly used in both pure water and antifreeze systems.
Another type of solar collector is the evacuated tube collector, which is intended for cold climates that do not experience severe hail and/or for applications where high temperatures are needed (i.e., over 94 °C [201 °F]). Placed in a rack, evacuated tube collectors form a row of glass tubes, each containing absorption fins attached to a central heat-conducting rod (copper or condensation-driven). The evacuated description refers to the vacuum created in the glass tubes during the manufacturing process, which results in very low heat loss and lets evacuated tube systems achieve extreme temperatures, far in excess of water's boiling point.
Geothermal heating
In countries like Iceland and New Zealand, and other volcanic regions, water heating may be done using geothermal heating, rather than combustion.
Gravity-fed system
Where a space-heating water boiler is employed, the traditional arrangement in the UK is to use boiler-heated (primary) water to heat potable (secondary) water contained in a cylindrical vessel (usually made of copper)—which is supplied from a cold water storage vessel or container, usually in the roof space of the building. This produces a fairly steady supply of DHW (Domestic Hot Water) at low static pressure head but usually with a good flow. In most other parts of the world, water heating appliances do not use a cold water storage vessel or container, but heat water at pressures close to that of the incoming mains water supply.
Point-of-use (POU) vs. Centralized hot water
A locational design decision may be made between point-of-use and centralized water heaters. Centralized water heaters are more traditional, and are still a good choice for small buildings. For larger buildings with intermittent or occasional hot water use, multiple POU water heaters may be a better choice, since they can reduce long waits for hot water to arrive from a remote heater. The decision where to locate the water heater(s) is only partially independent of the decision of a tanked vs. tankless water heater, or the choice of energy source for the heat.
Other improvements
Other improvements include check valve devices at their inlet and outlet, cycle timers, electronic ignition in the case of fuel-using models, sealed air intake systems in the case of fuel-using models, and pipe insulation. The sealed air-intake system types are sometimes called "band-joist" intake units. "High-efficiency" condensing units can convert up to 98% of the energy in the fuel to heating the water. The exhaust gases of combustion are cooled and are mechanically ventilated either through the roof or through an exterior wall. At high combustion efficiencies a drain must be supplied to handle the water condensed out of the combustion products, which are primarily carbon dioxide and water vapor.
In traditional plumbing in the UK, the space-heating boiler is set up to heat a separate hot water cylinder or water heater for potable hot water. Such water heaters are often fitted with an auxiliary electrical immersion heater for use if the boiler is out of action for a time. Heat from the space-heating boiler is transferred to the water heater vessel/container by means of a heat exchanger, and the boiler operates at a higher temperature than the potable hot water supply. Most potable water heaters in North America are completely separate from the space heating units, due to the popularity of HVAC/forced air systems in North America.
Residential combustion water heaters manufactured since 2003 in the United States have been redesigned to resist ignition of flammable vapors and incorporate a thermal cutoff switch, per ANSI Z21.10.1. The first feature attempts to prevent vapors from flammable liquids and gasses in the vicinity of the heater from being ignited and thus causing a house fire or explosion. The second feature prevents tank overheating due to unusual combustion conditions. These safety requirements were made based on homeowners storing, or spilling, gasoline or other flammable liquids near their water heaters and causing fires. Since most of the new designs incorporate some type of flame arrestor screen, they require monitoring to make sure they don't become clogged with lint or dust, reducing the availability of air for combustion. If the flame arrestor becomes clogged, the thermal cutoff may act to shut down the heater.
A wetback stove (NZ), wetback heater (NZ), or back boiler (UK), is a simple household secondary water heater using incidental heat. It typically consists of a hot water pipe running behind a fireplace or stove (rather than hot water storage), and has no facility to limit the heating. Modern wetbacks may run the pipe in a more sophisticated design to assist heat-exchange. These designs are being forced out by government efficiency regulations that do not count the energy used to heat water as 'efficiently' used.[3]


TRANSISTOR SWITCH


The transistor as a switch

Because a transistor's collector current is proportionally limited by its base current, it can be used as a sort of current-controlled switch. A relatively small flow of electrons sent through the base of the transistor has the ability to exert control over a much larger flow of electrons through the collector.
Suppose we had a lamp that we wanted to turn on and off with a switch. Such a circuit would be extremely simple, as in Figure below (a).
For the sake of illustration, let's insert a transistor in place of the switch to show how it can control the flow of electrons through the lamp. Remember that the controlled current through a transistor must go between collector and emitter. Since it is the current through the lamp that we want to control, we must position the collector and emitter of our transistor where the two contacts of the switch were. We must also make sure that the lamp's current will move against the direction of the emitter arrow symbol to ensure that the transistor's junction bias will be correct, as in Figure below (b).


(a) mechanical switch, (b) NPN transistor switch, (c) PNP transistor switch.
A PNP transistor could also have been chosen for the job. Its application is shown in Figure above (c).
The choice between NPN and PNP is really arbitrary. All that matters is that the proper current directions are maintained for the sake of correct junction biasing (electron flow going against the transistor symbol's arrow).
Going back to the NPN transistor in our example circuit, we are faced with the need to add something more so that we can have base current. Without a connection to the base wire of the transistor, base current will be zero, and the transistor cannot turn on, resulting in a lamp that is always off. Remember that for an NPN transistor, base current must consist of electrons flowing from emitter to base (against the emitter arrow symbol, just like the lamp current). Perhaps the simplest thing to do would be to connect a switch between the base and collector wires of the transistor as in Figure below (a).

Transistor: (a) cutoff, lamp off; (b) saturated, lamp on.
If the switch is open as in Figure above (a), the base wire of the transistor will be left “floating” (not connected to anything) and there will be no current through it. In this state, the transistor is said to be cutoff. If the switch is closed as in Figure above (b), electrons will be able to flow from the emitter through to the base of the transistor, through the switch, up to the left side of the lamp, back to the positive side of the battery. This base current will enable a much larger flow of electrons from the emitter through to the collector, thus lighting up the lamp. In this state of maximum circuit current, the transistor is said to be saturated.
Of course, it may seem pointless to use a transistor in this capacity to control the lamp. After all, we're still using a switch in the circuit, aren't we? If we're still using a switch to control the lamp -- if only indirectly -- then what's the point of having a transistor to control the current? Why not just go back to our original circuit and use the switch directly to control the lamp current?
Two points can be made here, actually. First is the fact that when used in this manner, the switch contacts need only handle what little base current is necessary to turn the transistor on; the transistor itself handles most of the lamp's current. This may be an important advantage if the switch has a low current rating: a small switch may be used to control a relatively high-current load. More importantly, the current-controlling behavior of the transistor enables us to use something completely different to turn the lamp on or off. Consider Figure below, where a pair of solar cells provides 1 V to overcome the 0.7 VBE of the transistor to cause base current flow, which in turn controls the lamp.

Solar cell serves as light sensor.
Or, we could use a thermocouple (many connected in series) to provide the necessary base current to turn the transistor on in Figure below.

A single thermocouple provides less than 40 mV. Many in series could produce in excess of the 0.7 V transistor VBE to cause base current flow and consequent collector current to the lamp.
Even a microphone (Figure below) with enough voltage and current (from an amplifier) output could turn the transistor on, provided its output is rectified from AC to DC so that the emitter-base PN junction within the transistor will always be forward-biased:

Amplified microphone signal is rectified to DC to bias the base of the transistor providing a larger collector current.
The point should be quite apparent by now: any sufficient source of DC current may be used to turn the transistor on, and that source of current only need be a fraction of the current needed to energize the lamp. Here we see the transistor functioning not only as a switch, but as a true amplifier: using a relatively low-power signal to control a relatively large amount of power. Please note that the actual power for lighting up the lamp comes from the battery to the right of the schematic. It is not as though the small signal current from the solar cell, thermocouple, or microphone is being magically transformed into a greater amount of power. Rather, those small power sources are simply controlling the battery's power to light up the lamp.
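For a designed (rather than improvised) switch, the base resistor is chosen so the transistor saturates even for the weakest expected device. The sketch below uses assumed values throughout: a 12 V drive, a 60 mA lamp, a minimum beta of 50, and an overdrive factor to push the device well past the edge of saturation.

def base_resistor(v_drive, i_load, beta_min=50.0, v_be=0.7, overdrive=5.0):
    i_b_edge = i_load / beta_min          # base current at edge of saturation
    i_b = overdrive * i_b_edge            # force the device deep into saturation
    return (v_drive - v_be) / i_b         # Ohm's law across the base resistor

r_b = base_resistor(v_drive=12.0, i_load=0.060)
print(f"use about {r_b:.0f} ohm")  # ~1883 ohm, so a standard 1.8 k part

Dividing the load current by the minimum beta gives the base current at the edge of saturation; the overdrive factor then forces the transistor well past that edge, so the collector-emitter drop, and hence the power dissipated in the switch, stays low.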


Tuesday, 21 April 2015

HISTORY AC


History
The first alternator to produce alternating current was a dynamo electric generator based on Michael Faraday's principles constructed by the French instrument maker Hippolyte Pixii in 1832.[4] Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions.[5]
Alternating current technology first developed in Europe due to the work of Guillaume Duchenne (1850s), the Hungarian Ganz Works (1870s), Sebastian Ziani de Ferranti (1880s), Lucien Gaulard, and Galileo Ferraris.
In 1876, Russian engineer Pavel Yablochkov invented a lighting system based on a set of induction coils where the primary windings were connected to a source of AC. The secondary windings could be connected to several 'electric candles' (arc lamps) of his own design.[6][7] The coils Yablochkov employed functioned essentially as transformers.[6]
In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.[8]
DC distribution systems
During the initial years of electricity distribution, Edison's direct current was the standard for the United States, and Edison did not want to lose all his patent royalties.[9] Direct current worked well with incandescent lamps, which were the principal load of the day, and with motors. Direct-current systems could be directly used with storage batteries, providing valuable load-leveling and backup power during interruptions of generator operation. Direct-current generators could be easily paralleled, allowing economical operation by using smaller machines during periods of light load and improving reliability. At the introduction of Edison's system, no practical AC motor was available. Edison had invented a meter to allow customers to be billed for energy proportional to consumption, but this meter worked only with direct current.
The principal drawback of direct-current distribution was that customer loads, distribution and generation were all at the same voltage. Generally, it was uneconomical to use a high voltage for transmission and reduce it for customer uses. Even with the Edison 3-wire system (placing two 110-volt customer loads in series on a 220-volt supply), the high cost of conductors required generation to be close to customer loads, otherwise losses made the system uneconomical to operate.
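A small worked example may make the 3-wire arrangement concrete; this sketch (with assumed, illustrative load and line values) shows how placing two balanced 110-volt loads in series on 220 V halves the feeder current and quarters the I²R loss:

P_LOAD = 1000.0   # assumed power of each 110 V load, watts
R_LINE = 0.5      # assumed feeder resistance, ohms

# Plain 2-wire 110 V feed: both loads in parallel draw their
# combined current at 110 V.
i_2wire = 2 * P_LOAD / 110.0
loss_2wire = i_2wire ** 2 * R_LINE

# Edison 3-wire feed: the two loads sit in series across 220 V;
# with balanced loads the neutral carries no current.
i_3wire = 2 * P_LOAD / 220.0
loss_3wire = i_3wire ** 2 * R_LINE

print(f"2-wire: {i_2wire:.1f} A, loss {loss_2wire:.0f} W")
print(f"3-wire: {i_3wire:.1f} A, loss {loss_3wire:.0f} W")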
Transformers
Alternating current systems can use transformers to change voltage from low to high level and back, allowing generation and consumption at low voltages but transmission, possibly over great distances, at high voltage, with savings in the cost of conductors and energy losses.
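As a sketch of why this matters economically (illustrative numbers, not from the text): an ideal transformer steps voltage by its turns ratio while current scales inversely, so raising the transmission voltage by a factor of k cuts line current by k and I²R line loss by k²:

P = 10_000.0    # assumed power delivered, watts
R_LINE = 1.0    # assumed line resistance, ohms

# Ideal transformer relations: V_s/V_p = N_s/N_p and I_s/I_p = N_p/N_s,
# so the transmitted power V*I is (ideally) unchanged by the transformer.
for v_line in (220.0, 2200.0):   # low vs. 10x-higher transmission voltage
    i_line = P / v_line
    loss = i_line ** 2 * R_LINE
    print(f"{v_line:6.0f} V line: {i_line:6.2f} A, I^2*R loss {loss:8.1f} W")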
A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They also exhibited the invention in Turin in 1884. However, these early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil.[10]
Direct current systems did not have these drawbacks, giving them significant advantages over early AC systems.
Pioneers
 
The prototype of the ZBD transformer, on display at the Széchenyi István Memorial Exhibition, Nagycenk, Hungary
 
The Hungarian "ZBD" Team( Károly Zipernowsky, Ottó Bláthy, Miksa Déri ). They were the inventors of the first high efficiency, closed core shunt connection transformer. The three also invented the modern power distribution system: Instead of former series connection they connect transformers that supply the appliances in parallel to the main line.Blathy invented the AC Wattmeter, and they invented the essential Constant Voltage Generator.
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz factory, had determined that open-core devices were impracticable, as they were incapable of reliably regulating voltage.[11] In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either a) wound around an iron wire ring core or b) surrounded by an iron wire core.[10] In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs.[12]
In 1884, the Ganz factory shipped the world's first five high-efficiency AC transformers.[13] The first unit was manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form.[13]
The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1,400 to 2,000 V) than the voltage of utilization loads (100 V initially preferred).[14][15] When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces.[16][17]
The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems[18] by the invention of constant-voltage generators in 1885.[19] Ottó Bláthy also invented the first AC electricity meter.[20][21][22][23]
AC power systems were developed and adopted rapidly after 1886 due to their ability to distribute electricity efficiently over long distances, overcoming the limitations of the direct current system. In 1886, the ZBD engineers designed, and the Ganz factory supplied electrical equipment for, the world's first power station that used AC generators to power a parallel-connected common electrical network, the steam-powered Rome-Cerchi power plant.[24] The reliability of AC technology received impetus after the Ganz Works electrified a large European metropolis: Rome in 1886.[24]

The city lights of Prince George, British Columbia viewed in a motion blurred exposure. The AC blinking causes the lines to be dotted rather than continuous.

Westinghouse Early AC System 1887
(US patent 373035)
In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo), including alternators of his own design and transformer designs similar to those of Gaulard and Gibbs.[25] In 1890 he designed their power station at Deptford[26] and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system.[27]
In the US, William Stanley, Jr. designed one of the first practical devices to transfer AC power efficiently between isolated circuits. Using pairs of coils wound on a common iron core, his design, called an induction coil, was an early (1885) transformer. Stanley also worked on engineering and adapting European designs, such as the Gaulard and Gibbs transformer, for US entrepreneur George Westinghouse, who started building AC systems in 1886. The spread of Westinghouse and other AC systems triggered a pushback in late 1887 from Thomas Edison (a proponent of direct current), who attempted to discredit alternating current as too dangerous in a public campaign called the "War of Currents".
In 1888, alternating current systems gained further viability with the introduction of a functional AC motor, something these systems had lacked until then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown.[28]
The Ames Hydroelectric Generating Plant (spring of 1891) and the original Niagara Falls Adams Power Plant (August 25, 1895) were among the first hydroelectric AC power plants. The first commercial power plant in the United States using three-phase alternating current was the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California, in 1893, designed by Almarian Decker. Decker's design incorporated 10,000-volt three-phase transmission and established the standards for the complete system of generation, transmission and motors used today.
The Jaruga Hydroelectric Power Plant in Croatia was put into operation on 28 August 1895. The two generators (42 Hz, 550 kW each) and the transformers were produced and installed by the Hungarian company Ganz. The transmission line from the power plant to the city of Šibenik was 11.5 kilometers (7.1 mi) long on wooden towers, and the municipal distribution grid (3,000 V/110 V) included six transformer stations.
Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others.[29][30] Calculations in unbalanced three-phase systems were simplified by the symmetrical components method discussed by Charles Legeyt Fortescue in 1918.


AC POWER



AC power supply frequencies

Further information: Mains power around the world
The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably Japan, where the eastern part of the country runs at 50 Hz and the western part at 60 Hz.
A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and for commutator-type traction motors used on railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. Lower frequencies also provided the advantage of lower impedance losses, since inductive reactance is proportional to frequency. The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2/3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland.
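Since inductive reactance grows linearly with frequency (X_L = 2πfL), a lower supply frequency directly reduces the reactive impedance of motors and lines; a minimal sketch with an assumed inductance:

import math

L = 0.010   # assumed inductance, henries (for illustration only)
for f in (16.7, 25.0, 50.0, 60.0):   # frequencies mentioned above, in Hz
    x_l = 2 * math.pi * f * L        # inductive reactance X_L = 2*pi*f*L
    print(f"{f:5.1f} Hz -> X_L = {x_l:5.2f} ohms")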
Off-shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds.
Computer mainframe systems are often powered by 415 Hz, using customer-supplied 35 or 70 kVA motor-generator sets.[3] Smaller mainframes may have an internal 415 Hz M-G set. In any case, the input to the M-G set is the local customary voltage and frequency: variously 200 (Japan), 208, 240 (North America), 380, 400 or 415 (Europe) volts, and 50 or 60 Hz.
Effects at high frequencies
Main article: Skin effect
A direct current flows uniformly throughout the cross-section of a uniform wire. An alternating current of any frequency is forced away from the wire's center, toward its outer surface. This is because the acceleration of an electric charge in an alternating current produces waves of electromagnetic radiation that cancel the propagation of electricity toward the center of materials with high conductivity. This phenomenon is called skin effect.
At very high frequencies the current no longer flows in the body of the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density falls to 1/e (about 37%) of its surface value, i.e. is reduced by about 63%. Even at the relatively low frequencies used for power transmission (50–60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high-current conductors are usually hollow to reduce their mass and cost.
Since the current tends to flow in the periphery of conductors, the effective cross-section of the conductor is reduced. This increases the effective AC resistance of the conductor, since resistance is inversely proportional to the cross-sectional area through which the current actually flows. The AC resistance is often many times higher than the DC resistance, causing a much higher energy loss due to ohmic heating (also called I²R loss).
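A minimal sketch of the formula behind these figures (using a typical copper resistivity; the conductor radius is an assumed example, and the ratio R_AC/R_DC ≈ a/(2δ) is a standard approximation valid when the radius a is much larger than the skin depth δ):

import math

RHO_CU = 1.72e-8            # typical resistivity of copper, ohm*m
MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(f_hz):
    # delta = sqrt(rho / (pi * f * mu)): depth at which current
    # density falls to 1/e (~37%) of its surface value
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU_0))

delta = skin_depth(60.0)
print(f"Copper skin depth at 60 Hz: {delta * 1000:.2f} mm")   # roughly 8.5 mm

a = 0.020   # assumed conductor radius, metres (a thick bus bar)
print(f"R_AC/R_DC for radius {a * 1000:.0f} mm: {a / (2 * delta):.2f}")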
Techniques for reducing AC resistance
For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate the skin effect by forcing a more equal distribution of current throughout the total cross-section of the stranded conductors. Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers.
Techniques for reducing radiation loss
As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation.
Twisted pairs
At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signalling system, so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively cancelled by radiation from the other wire, resulting in almost no radiation loss.
Coaxial cables
Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the inner conductor is equal and opposite to the current flowing on the inner surface of the tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the electrical resistance of the central conductor) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables with an air dielectric rather than a solid one are preferred, as they transmit power with lower loss.
Waveguides
Waveguides are similar to coax cables, as both consist of tubes, with the biggest difference being that the waveguide has no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide.
Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are feasible only at microwave frequencies. Beyond this mechanical constraint, the electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large.
Fiber optics
At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used.
Mathematics of AC voltages
[Figure: sine wave]
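In standard textbook form, a sinusoidal AC voltage of peak value V_peak and frequency f is written

v(t) = V_peak × sin(2πft),

and its root-mean-square (effective) value, the figure usually quoted for mains supplies, is

V_rms = V_peak / √2 ≈ 0.707 × V_peak.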
