Monday, December 17, 2012

"Do's and Don'ts" when considering an FPGA to structured ASIC design methodology


More and more engineers are considering structured ASICs when they are designing advanced systems, because these components offer low unit cost, low power, and high performance along with fast turn-around.

In a structured ASIC, the functional resources – such as logic, memory, and I/O buffers – are embedded in a pre-engineered and pre-verified base layer. The device is then customized with the top few metal layers, requiring far less engineering effort to create a low-cost ASIC (Fig 1). This reduces not only the time and development costs, but also the risk of design errors, since the ASIC vendor only needs to generate metallization layers. With 90-nm process technologies, structured ASICs offer the density and performance required to meet a wide range of advanced applications.


Fig 1. Standard cell ASIC (top) versus structured ASIC (bottom).

However, there is still risk involved when it comes to developing a structured ASIC. Errors in the logic design can still exist, so one way to avoid time-consuming and costly silicon re-spins is to use FPGA prototyping and to then convert the design from an FPGA to some form of ASIC. FPGA prototyping is more successful for structured ASICs compared to standard cell ASICs when the structured ASIC mirrors the resources available on the FPGA. The closer the match between the I/O and memory of the FPGA and the structured ASIC, the lower the risk when the design is converted to an ASIC.

Some "Do's and Don'ts" to take into account when considering a structured ASIC design methodology are as follows:
Do
Establish a design methodology you can use for a wide range of applications. Make sure your design teams are trained on the tools and the FPGA and ASIC architectures to create the best possible design.

Use a software development environment that reduces the risk of design problems, such as functional logic errors. Logic verification and simulation, along with prototyping the design in an FPGA, is a proven method to ensure the design will work in the system.

Prototype your design with an FPGA using the FPGA features that give you the best performance and functionality. Also, generate the prototype with the IP you need for the application, which may require a soft processor, hard multipliers, and memory. In addition, use high-speed LVDS or other I/O to ensure you are building in the signal integrity needed to have a reliable system.

Test your design in-system as much as possible to verify the design works according to requirements. Make sure the system is tested with the FPGA prototype across the entire voltage and temperature range that the system will experience. That will reduce the risk that when the design is converted to an ASIC it will only operate over a limited temperature range and at nominal voltage.

Design the system to use either an FPGA or the structured ASIC. This gives you three major advantages. First, you can go into production with the FPGA and then change to the ASIC once it is available, getting to market faster and establishing a market position. Second, if there is an unexpected increase in demand and ASIC supplies are insufficient, some systems can be manufactured with the FPGA, keeping the production lines running. Finally, using the FPGA at the system's end-of-life saves you from placing one last ASIC order just to fulfill remaining manufacturing requirements.

An example is the Altera HardCopy II structured ASIC. Generate your prototype with a Stratix II FPGA, then go into production with the Stratix II FPGA while the Altera HardCopy Design Center migrates the design to a pin-compatible HardCopy II device. Once the HardCopy II device is approved and production units are available, the system can be produced using the lower cost HardCopy II device.
The combination of Altera Stratix II FPGA and HardCopy II structured ASICs also gives you unique manufacturing flexibility, since you can use either in production. For example, you can use the HardCopy device for low cost, but if you have a sudden increase in demand and need more devices immediately, you can use off-the-shelf Stratix II FPGAs as a substitute. You can also go back to using Stratix II FPGAs exclusively if you need to update the design to fix an error or make a change for a specific customer.

Don't

Use an FPGA to prototype only logic and low-level I/O (such as LVTTL or LVCMOS). That will limit your design to low-end gate arrays that won't provide the performance edge needed. Too often, only the logic is prototyped in the FPGA, leading to a misconception of how well the design really works in the system. Many designs also require high-speed memory interfaces, and the best design practice is prototyping to ensure the interface performs as required, particularly across voltage and temperature variations.

Choose an ASIC methodology based only on unit cost. That may save some bill-of-materials cost but make the system uncompetitive. Include factors such as realistic development time and costs along with total engineering effort. In the long run, an FPGA along with a structured ASIC can provide lower development costs and faster development turn-around time.

Consider only standard cell ASIC technology for ASSP designs. Sometimes a structured ASIC or even an FPGA is the right fit for the annual volumes and the need for fast time to market.

Choose the structured ASIC before you look at the market needs for the design. Trying to shoehorn a design into a structured ASIC that is too small or feature-limited results in a system that is DOA in the market.

Consider only single-chip solutions. Sometimes the best way to architect a system can be using two devices rather than one large ASIC. Partitioning the design can reduce overall development time and simplify the design process. You can also reduce the risk of having to re-spin a large ASIC design.
Author: Rob Schreck, Altera
Source: design-reuse.com

Monday, December 10, 2012

Embedded Application and Product Engineering using ARM Processors


Reduced Instruction Set Computing (RISC) is a processor design philosophy associated with high performance and high energy efficiency. One of the forerunners in the production and supply of RISC embedded microprocessors is ARM Holdings. ARM's catalog of processors is characterized by strong performance and high energy efficiency, a market-decider when it comes to digital products. Delivering high performance at low cost for today's market of advanced digital applications is now a reality because of advancements in embedded product engineering using ARM processors. Expert ARM architects are constantly researching and developing further improvements to the already advanced ARM architecture, which has become a staple for embedded product design and engineering. ARM Holdings is also the world's leading semiconductor intellectual property (IP) supplier, and is at the core of the development of digital electronic products.

According to industry experts, more than two million ARM-based processors are being used in the production of various kinds of machinery and equipment. The worldwide community of ARM developers builds top-notch processors for the benefit of product design companies and designers around the globe. There is a long list of experienced, well-regarded ARM developers offering embedded product engineering with ARM processors at highly competitive pricing. Development of advanced ARM processors, implementation of DSP algorithms, exception and interrupt handling, cache technology, and memory management are some of the tasks that ARM developers manage for companies.
Embitel is a member of the ARM Connected Community and Partner Network, which is a global network of companies aligned to provide complete solutions, from design to manufacture, for embedded application and product engineering for products based on the ARM architecture. Embitel’s partnership with ARM includes digital multi-channel solutions for E-Commerce and Embedded technology development.

Wednesday, November 7, 2012

Webinar on Choosing Right Technology to Build HMI


Human Machine Interface (HMI) systems provide the medium by which a user operates an automation process, system, or machine. The effectiveness of an HMI depends upon a design process that incorporates all technical, commercial, and ergonomic application requirements. The HMI is a prominent tool for system automation, used to configure, monitor, and control the system. In this webinar we cover choosing the right technologies for building HMIs for automation components such as sensors and diagnostic equipment.

Key aspects of the webinar:
  • HMI overview with respect to system automation
  • Key factors for selecting different technologies for HMI development
  • Comparison of commonly used frameworks in HMI development
About Speakers: 
Jitendra Kumar Tripathi, Technical Architect
A technical expert in HMI technologies, Jitendra has worked on Java, C++, plugin development, the Qt framework, Microsoft technologies, and more. He has built several HMIs for diagnostic equipment.
Vidya Sagar, Competency Head
12+ years of technical expertise in the development of embedded systems. Vidya Sagar has worked on various aspects of embedded systems, including building HMIs, and has taken part in building sensor devices and their HMIs.
Webinar: Choosing right technology to build HMI
Duration: 45 Minutes + Q&A 
Date: Wednesday, November 21, 2012
Time:  3 PM CET/2 PM BST/9 AM EST/7:30 PM IST
For more information and registration, visit https://www2.gotomeeting.com/register/496403930
About Embitel :
Embitel offers services in embedded system development, including turnkey systems, board support packages, device drivers, driver development and porting for various OS platforms, cross-platform porting, feature development and enhancement, and software sustenance and maintenance.
Embitel has created a niche for itself in the industrial automation domain. We offer focused, specialized product development services in the areas of industrial sensors, motion controllers, protection relays, integrated PLCs, and HMI panels. We also execute real-time product development projects using a variety of operating systems such as Embedded Linux, WinCE, VCRT, and QNX.
For more information and support, please visit http://www.embitel.com/embedded-services/automation-services/

Thursday, October 18, 2012

FPGA's vs. ASIC's


Deciding between ASICs and FPGAs requires designers to answer tough questions concerning costs, tool availability and effectiveness, as well as how best to present the information to management to guarantee support throughout the design process.
The first step is to make a block diagram of what you want to integrate. Sometimes it helps to get some help from an experienced field applications engineer. Remember that time is money. Your next move is to come up with some idea of production volume. Next, make a list of design objectives in order of importance. These could include cost (including nonrecurring engineering charges), die size, time-to-market, tools, performance and intellectual property requirements. You should also take into account your own design skills, what you have time to do and what you should farm out. Remember that it must make sense financially or you are doomed from the start.
Time-to-market is often at the top of the list. Some large ASICs can take a year or more to design. A good way to shorten development time is to make prototypes using FPGAs and then switch to an ASIC. But the most common mistake that designers make when they decide to build an ASIC is that they never formally pitch their idea to management. Then, after working on it for a week, the project is shot down for time-to-market or cost reasons. Designers should never overlook the important step of making their case to their managers.
Before starting on an ASIC, ask yourself or your management team if it is wise to spend $250,000 or more on NRE charges. If the answer is yes and you get the green light, then go. If the answer is no, then you'll need to gather more information before taking the ASIC route. Understand that most bean counters do not see any value in handing someone $250,000 for a one-time charge. They prefer to add cost to the production.
Say your project has an NRE of $300,000, a volume of 5,000 units, and it replaces circuitry that costs $80. The final ASIC cost is $40 per unit. You do some math and determine the break-even point is three years. If you amortize the same design over five years, this could save your company $400,000 even after the NRE has been absorbed.
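The unit economics behind this decision are simple enough to capture in a few lines. This is only a sketch: the article's three-year break-even presumably also reflects production ramp and carrying costs, and the 17,500-unit lifetime volume below is a hypothetical figure chosen to illustrate the $400,000 saving, not a number from the article.

```python
def breakeven_volume(nre, old_unit_cost, new_unit_cost):
    """Units to ship before the one-time NRE charge is paid back."""
    return nre / (old_unit_cost - new_unit_cost)

def lifetime_saving(nre, old_unit_cost, new_unit_cost, total_units):
    """Net saving over the product's lifetime after absorbing the NRE."""
    return total_units * (old_unit_cost - new_unit_cost) - nre

print(breakeven_volume(300_000, 80, 40))          # 7500.0 units
print(lifetime_saving(300_000, 80, 40, 17_500))   # 400000
```

Parameterizing the calculation this way makes it easy to show management how sensitive the decision is to volume: halve the lifetime volume and the ASIC stops paying for itself.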
Another option is to do a "rapid ASIC" using preformed ASIC blocks, which saves time and lowers NRE costs. It could also make sense to convert an FPGA to ASIC directly, which lowers NRE a small amount from the rapid type.
Now let's say your company will not fund an ASIC effort. That means it's time to consider FPGAs. First, be aware that while the tools are free on the Web for the smaller FPGAs, you'll have to pay for a license file for the ones with high gate counts. The good news is that there are no NRE charges.
Modern FPGAs are packed with features that were not previously available. Today's FPGAs usually come with phase-locked loops (PLLs), low-voltage differential signaling (LVDS), clock data recovery, more internal routing, high speed (most tools measure timing in picoseconds), hardware multipliers for DSP, memory, programmable I/O, IP cores and microprocessor cores. You can integrate all your digital functions into one part and really have a system on a chip. When you look at all these features, it can be tough to argue for an ASIC.
Moreover, an FPGA can be reprogrammed in a snap, while an ASIC can take $50,000 and six weeks to make the same changes. FPGA costs range from a couple of dollars to several hundred or more, depending on the features listed above.
So before you get moving, make sure to enlist some help, get the managers to support you, come up with a meaningful cost estimate, choose the right weapon -- be it ASIC or FPGA -- and then move into production.

Author : Jeff Kriegbaum
Source : http://www.design-reuse.com/articles/9010/fpga-s-vs-asic-s.html

Monday, October 8, 2012

Is DDR4 a bridge too far?


We’ve gone through two decades where the PC market made the rules for technology. The industry faces a question now: Can a new technology go mainstream without the PC?

By now, you’ve certainly read the news from Cadence on their DDR4 IP for TSMC 28nm. They are claiming a PHY implementation that exceeds the data rates specified for DDR4-2400, which means things are blazing fast. What’s not talked about much is how the point-to-point interconnect needed for large memory spaces is going to be handled.

Barring some earth-shattering announcement at IDF, Intel is still a long way from DDR4. The nearest thing on their roadmap is Haswell-EX, a server platform for 2014. (Writing this when IDF is just getting underway is tempting fate, kind of like washing my car and then having it immediately rain.) AMD has been massively silent on the subject of DDR4 fitting into their processor roadmap.

Meanwhile, both Samsung and Micron are ramping up 30nm production of DDR4, and Samsung is publicly urging Intel to get moving. Both memory suppliers are slightly ahead of the curve, since the DDR4 spec isn’t official just yet. However, JEDEC has scheduled the promised DDR4 workshop for October 30, something they said would approximately coincide with the formal release of the specification. (In other words, it’s ready.)

We also have to factor in that LPDDR3 just hit the ground as a released specification this May, and memory chips implementing it won’t reach the pricing sweet spot for another year. Most phone manufacturers are still using LPDDR2 for that reason. (Again, iPhone 5 announcement this week, rain on my post forecasted.) Tablet types are just starting to pick up LPDDR3, amid talk the first implementations already need more bandwidth.

So, why the push for DDR4, especially in TSMC 28nm? DDR4 is obviously the answer to much higher memory bandwidth for cloud computing and the like. I’m sure there are other applications out there, but that one was easy to find.

Interest in DDR4 has to be coming from somewhere in the ARM server camp, otherwise Cadence and TSMC wouldn’t be spending time on it. In spite of the power advances, DDR4 is nowhere near low-power enough to show up in a phone, and there’s no sign of an LPDDR4 specification yet. ARM 64-bit server implementations are just getting rolling, and Applied Micro’s X-Gene has sampled – with DDR3.

The volume driver for DDR4 – if it’s not PCs – is in question. The natural progression of speed that the PC markets have pushed for looks like it is about to run smack into the economics of affordable implementations, and that in turn could make life for the memory manufacturers interesting. (In a related side note, Elpida’s bondholders have come in saying the Micron bid is way too low.) Or, Intel and AMD could jump in and force the issue, betting on adoption farther down their PC supply chains.

DDR4 and IP supporting it in ARM server space could prove to be a turning point for technology investment, an inflection point in the way things have been done and a change from the PC driving. Or, it could end up being a bridge too far, but paving the way for another specification suited for mobile devices.

What are your thoughts on the outlook for DDR4, LPDDR3, an ARM server market, and the overall dynamics of PCs, servers, tablets and phones versus memory technology?

Author: Don Dingee
Source: http://www.semiwiki.com

Monday, September 3, 2012

The Unknown in Your Design Can be Dangerous


The SystemVerilog standard defines an X as an “unknown” value, used to represent when simulation cannot definitively resolve a signal to a “1”, a “0”, or a “Z”. Synthesis, on the other hand, defines an X as a “don’t care”, enabling greater flexibility and optimization. Unfortunately, Verilog RTL simulation semantics often mask the propagation of an unknown value by converting the unknown into a known, while gate-level simulations show additional Xs that will not exist in real hardware. The result is that bugs get masked in RTL simulation; they show up at the gate level, but time-consuming iterations between simulation and synthesis are then required to debug and resolve them. Resolving differences between gate and RTL simulation results is painful because synthesized logic is less familiar to the user, and Xs make correlation between the two harder. Unwarranted X-propagation thus proves costly, causes painful debug, and sometimes allows functional bugs to slip through to silicon.

Continued increases in SoC integration and the interaction of blocks in various states of power management are exacerbating the X problem. In simulation, the X value is assigned to all memory elements by default. While hardware resets can be used to initialize registers to known values, resetting every flop or latch is not practical because of routing overhead. For synchronous resets, synthesis tools typically merge the reset with data-path signals, thereby losing the distinction between X-free logic and X-prone logic. This in turn causes unwarranted X-propagation during the reset simulation phase. State-of-the-art low power designs have additional sources of Xs, with the additional complexity that they manifest dynamically rather than only during chip power-up.
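The core mismatch between RTL and gate-level X handling can be illustrated with a toy three-value model. This is a sketch, not a real simulator: it only models the well-known behavior that a Verilog `if` with an X condition falls through to the else branch, while a gate-level AND lets the X survive unless the other input forces the result.

```python
# Toy three-value logic: '0', '1', and 'x' (unknown)

def and_gate(a, b):
    """Gate-level AND semantics: an X propagates unless the result is forced."""
    if a == '0' or b == '0':
        return '0'          # a controlling 0 forces the output regardless of X
    if a == '1' and b == '1':
        return '1'
    return 'x'              # otherwise the unknown survives

def rtl_if(cond, then_val, else_val):
    """Verilog RTL 'if' semantics: an X condition is not '1', so the else
    branch is taken, silently converting an unknown into a known value."""
    return then_val if cond == '1' else else_val

print(rtl_if('x', '1', '0'))   # prints 0 -- the X is masked at RTL
print(and_gate('x', '1'))      # prints x -- the X survives at gate level
```

The same design element thus looks clean in RTL simulation and dirty at gate level, which is exactly the correlation problem the article describes.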

Lisa Piper, from Real Intent, presented on this topic at DVCon 2012, and in her paper she described a flow that mitigates X-issues. The flow is reproduced here.

She describes a solution to the X-propagation problem that is part technology and part methodology. The flow brings together structural analysis, formal analysis, and simulation in a way that addresses all the problems and can be scaled. The figure above shows the use model for the design engineer and the verification engineer. The solution is centered on static analysis for the design engineer and is primarily simulation-based for the verification engineer. Also, the designer-centric flow is preventative in nature, while the verification flow is intended to identify and debug issues.

Author : Graham Bell
Source : www.semiwiki.com

Monday, August 13, 2012

Industrial Ethernet—The basics

When you talk about office and home networking, typically you’re talking about Ethernet-based networks—computers, printers and other devices that contain Ethernet interfaces connected together via Ethernet hubs, switches and routers. In the industrial area the networking picture is more complex. Now, Ethernet is becoming a bigger part of that picture. This article is an introduction to the basics of Ethernet, with a bit of added detail on how it fits into the industrial networking picture.
Ethernet’s roots
Although Xerox’s Bob Metcalfe sketched the original Ethernet concept on a napkin in 1973, its inspiration came even earlier. ALOHAnet, a wireless data network, was created to connect together several widely separated computer systems on Hawaiian college campuses located on different islands. The challenge was to enable several independent data radio nodes to communicate on a peer-to-peer basis without interfering with each other. ALOHAnet’s solution was a version of the carrier sense multiple access with collision detection (CSMA/CD) concept. Metcalfe based his Ph.D. work on finding improvements to ALOHAnet, which led to his work on Ethernet.
Ethernet, which later became the basis for the IEEE 802.3 network standard, specifies physical and data link layers of network functionality. The physical layer specifies the types of electrical signals, signaling speeds, media and connector types and network topologies. The data link layer specifies how communications occurs over the media—using the CSMA/CD technique mentioned above—as well as the frame structure of messages transmitted and received.
Ethernet Physical Layer
In the early days Ethernet options were more limited than they are today. Two common options were 10Base2 and 10Base5 configurations. Both operated at 10 Mbps and used coaxial cable with nodes connected to the cable via Tee connectors, or through ‘attachment unit interfaces’ (AUI) in a multi-drop bus configuration. 10Base2 networks allowed segment lengths of up to 185 meters using RG-58 coaxial cable (also called Thin Ethernet). 10Base5 offered greater distances between nodes, but the thick coaxial cable and ‘vampire tap’ connections were bulky and difficult to work with. Later, another solution in this speed category was 10Base-FL, which uses fiber optic media and provides distances of up to 2 kilometers.
Another early 10 Mbps physical layer option—10Base-T—quickly gained popularity because it was easier to install and used inexpensive unshielded twisted pair (UTP) Category 3 cable. Nodes (typically computers with network interface cards, or NICs) were connected in a star topology to a hub, which in turn was connected to other network segments. Each computer had to be within 100 meters of the hub. Standard RJ-45 connectors were used.
In the mid-1990s 100 Mbps Ethernet equipment became available, increasing the data transfer rate significantly. NICs that would automatically adjust to operate at 10 Mbps or 100 Mbps made migration to the faster standard simple. Today, virtually all computer network interface cards implement 100Base-TX. Category 5e UTP cable is the standard cable used with 100Base-TX and cable lengths are the same as for 10Base-T networks. Coaxial-based networks are increasingly being replaced with fiber optic media, especially for point-to-point links. For example, 100Base-FX uses two optical fibers and allows full duplex point-to-point communications up to 2 kilometers. Gigabit Ethernet (1000 Mbps) options also are available using twisted pair and fiber optic media.
Data Link Layer
Ethernet’s data link layer defines its media access method. Half-duplex links, such as those connected in bus or star topologies (10/100Base-T, 10Base2, 10Base5, etc.), use carrier sense, multiple access with collision detection (CSMA/CD). This method allows multiple nodes to have equal access to the network, similar to early party-line telephone systems in which users listened for ongoing conversations and waited until the line was free before accessing the line. All nodes on an Ethernet network continuously monitor for transmissions on the media. If a node needs to transmit it waits until the network is idle, then begins transmission. While transmitting, each node monitors its own transmission and compares what it ‘hears’ with what it is trying to send. If two nodes begin transmitting at the same time, the signals will overlap, corrupting the originals. Both nodes will see a different signal from the one they are trying to send. This is recognized as a ‘collision’. If there is a collision, each node stops transmitting and only attempts to re-transmit after a preset delay, which is different for each node.
This method of media access makes it simple to add to or remove nodes from a network. Simply connect another node and it begins to listen and transmit when the network is available. However, as the number of nodes grows and the volume of traffic from each node increases, opportunities to gain access to the network decrease. As utilization increases, the number of collisions increases exponentially and the probability of getting access within a given length of time decreases dramatically. This characteristic makes Ethernet a probabilistic network, as opposed to a deterministic network, in which access time can be reliably predicted. (Master/slave and token passing network schemes are deterministic.)
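The per-node retry delay mentioned above is, in standard Ethernet, a random truncated binary exponential backoff: after each successive collision on the same frame the range of possible waits doubles, which is also why collision rates climb so sharply with utilization. A minimal Python sketch of that rule (the slot time shown is the classic 10 Mbps value):

```python
import random

SLOT_TIME_US = 51.2  # one slot time on 10 Mbps Ethernet (512 bit times)

def backoff_delay(collision_count):
    """Truncated binary exponential backoff, as used by CSMA/CD.

    After the nth collision on the same frame, a node waits a random
    number of slot times drawn from [0, 2**min(n, 10) - 1]; after 16
    failed attempts the frame is abandoned.
    """
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    slots = random.randint(0, 2 ** min(collision_count, 10) - 1)
    return slots * SLOT_TIME_US
```

Because each node draws its delay independently, two colliding nodes are unlikely to pick the same wait and collide again; but the delays are random, which is precisely what makes access time probabilistic rather than deterministic.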
On full-duplex point-to-point Ethernet links (10Base-FL, 100Base-FX, etc.), collisions are not an issue since only two nodes are present and separate send and receive channels are available. Another advantage is that data can be sent in both directions simultaneously, effectively doubling the data transfer rate.
The Ethernet Frame
The Ethernet data link layer also defines the format of data messages sent on a network. The data message format, or frame, contains several fields of information in addition to the data to be transferred across the network.
Obviously, at the heart of the message is the actual data that is to be sent. This is called the ‘data unit’. Ethernet data units can contain between 46 and 1500 eight-bit bytes of binary information. The actual length of the data unit is determined and included in the message as a field, to tell the receiver how to determine which part of the message is data. Each message must include source and destination addresses so that other nodes can determine where the message is coming from and going to.
These six-byte binary numbers are called MAC addresses. Every Ethernet node has a unique MAC address permanently stored in its hardware memory. The Ethernet frame also contains a four-byte ‘frame check sequence’ (FCS) field which is a binary number generated by the sending node that allows high reliability cyclic redundancy checking (CRC) error checking to be done by the receiving node.
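The field layout described above can be sketched in a few lines of Python. This is illustrative only: the preamble is omitted, the FCS here uses the standard CRC-32 algorithm but not Ethernet's exact on-the-wire bit ordering, and the MAC addresses in the example are made up.

```python
import struct
import zlib

def build_frame(dst_mac, src_mac, payload):
    """Assemble a simplified IEEE 802.3 frame (preamble omitted)."""
    if not 0 < len(payload) <= 1500:
        raise ValueError("data unit must be 1..1500 bytes")
    length = struct.pack('!H', len(payload))   # tells the receiver where the data ends
    padded = payload.ljust(46, b'\x00')        # pad up to the 46-byte minimum data unit
    body = dst_mac + src_mac + length + padded
    fcs = struct.pack('!I', zlib.crc32(body))  # 4-byte frame check sequence (CRC-32)
    return body + fcs

frame = build_frame(b'\xff' * 6, b'\x02\x00\x00\x00\x00\x01', b'hello')
print(len(frame))  # 64 -- the classic minimum Ethernet frame size
```

Note how the pieces add up: 6 + 6 bytes of addresses, a 2-byte length, a 46-byte (minimum) data unit, and a 4-byte FCS give the familiar 64-byte minimum frame.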
Hubs and Switches
Ethernet hubs are simple physical layer devices used with 10/100Base-T(X) networks to repeat and split Ethernet signals. Nodes connect to ports on the hub as branches to create a physical star topology. Hubs receive data from the connected nodes, regenerate it and send it out on all other ports. By regenerating the data the maximum segment distance can be extended. All transmissions go to all the connected nodes, the same as on a bus network. Nodes respond to transmissions based on the destination address contained in the message frame. Hubs allow all wiring to connect to a central location making it easy to isolate problem nodes and make changes to the network.
Switches are similar to hubs except that they divide the network into segments. An internal table is maintained of the destination addresses of the nodes connected to the switch. When an Ethernet packet is received at one of the switch’s ports the destination address in the packet is read, a connection is made to the appropriate port and the packet is sent to that node. This isolates the message traffic from the other nodes, decreasing the utilization on the overall network. Ethernet switches can be managed or unmanaged. Unmanaged switches operate as described above. Managed switches allow advanced control of the network. They include software to configure the network and diagnostic ports to monitor network traffic.
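The table-driven forwarding a switch performs can be modeled in a few lines. This is a toy sketch under simplifying assumptions (no table aging, no VLANs, port numbers and MAC strings are hypothetical):

```python
class LearningSwitch:
    """Toy model of the forwarding logic of an unmanaged Ethernet switch."""

    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}   # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward to the one known port
        # Unknown destination: flood out every port except the ingress port
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(4)
print(sw.handle_frame(0, 'A', 'B'))  # [1, 2, 3] -- 'B' unknown, flood like a hub
print(sw.handle_frame(1, 'B', 'A'))  # [0]       -- 'A' was learned on port 0
```

The second frame shows the payoff: once both addresses are learned, traffic between them stops reaching the other ports, which is the segmentation benefit the paragraph describes.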

Higher Level Network Functions
To facilitate reliable communications across multiple, and in some cases dissimilar networks, other higher-level protocols are used on top of Ethernet’s data link layer. The most common of these today, especially when connecting an Ethernet network to the Internet, is TCP/IP. IP, or Internet protocol, ensures packets are moved across the network based on their IP address. TCP, or transmission control protocol, makes sure data is delivered completely and error-free. Two or more Ethernet networks may be connected together via a router, a device that maintains a list of IP addresses on each network connected to it. The router monitors the IP addresses on packets received at its ports and routes them to the port connected to the appropriate network.

Ethernet and Industrial Systems
Ethernet’s simple and effective design has made it the most popular networking solution at the physical and data link levels, especially for general-purpose office networks. With high-speed options and a variety of media types to choose from Ethernet is efficient and flexible. Using inexpensive UTP cable and star topology, and CSMA/CD media access, Ethernet networks are easily designed and built. Nodes can be added or removed simply and troubleshooting is relatively easy to do. As Ethernet and related technologies have become prevalent in the general networking arena a large base of trained personnel has become available.
These factors, and the low cost of Ethernet hardware, have made Ethernet an attractive option for industrial networking applications. Also, the opportunity to use open protocols such as TCP/IP over Ethernet networks offers the possibility of a level of standardization and interoperability that has until now remained elusive in the industrial field.
However, the probabilistic nature of Ethernet is one characteristic that is a drawback for some industrial network applications. Historically, time critical networking applications have been handled using deterministic networks (using master/slave or token passing schemes). Utilization levels on industrial Ethernet networks must be carefully controlled as levels greater than 10% often result in inadequate performance. Still, as the overall cost/benefits of Ethernet have increased, industrial users have found ways to enhance Ethernet’s data transfer performance. One method is to segment networks using switches and routers to minimize unwanted network traffic and reduce utilization. Another is to use newer, higher level protocols that incorporate prioritization, synchronization and other techniques to ensure timely delivery of messages.
The result has been an ongoing shift toward the use of Ethernet for industrial control and automation applications. Ethernet is increasingly replacing proprietary communications at the plant floor level and in some cases moving downward into the cell and field levels.
Most major control system manufacturers now incorporate versions of Ethernet networks and higher-level Ethernet-related protocols into their product offerings. Often, several manufacturers and/or industry stakeholders have entered into cooperative efforts to develop Ethernet-related standards and products. Several of these now exist, though interoperability between them continues to be elusive.
Ethernet for Control Automation Technology (EtherCAT) is an open real-time Ethernet network developed by Beckhoff. It provides real-time performance, features twisted pair and fiber optic media, and supports various topologies. It is supported by the EtherCAT Technology Group, which has 168 member companies.
Ethernet Powerlink is a real-time Ethernet protocol that combines the CANopen concept with Ethernet technology. The Ethernet Powerlink Standardization Group (EPSG) is an open association of industry vendors, research institutes and end-users in the field of deterministic real-time Ethernet.
EtherNet/IP is an industrial networking standard that takes advantage of commercial off-the-shelf Ethernet communications chips and physical media. The IP stands for ‘industrial protocol’. ControlNet International (CI), the Industrial Ethernet Association (IEA) and the Open DeviceNet Vendor Association (ODVA) support it.
Modbus-TCP, supported by Schneider Automation, allows the well-proven Modbus protocol to be carried over standard Ethernet networks on TCP/IP.
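A Modbus-TCP request is simply the classic Modbus PDU behind a 7-byte MBAP header, carried over TCP (port 502). A sketch in Python of a Read Holding Registers (function 0x03) request; the field layout follows the published Modbus specification, while the transaction id, unit id, and register addresses are example values:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus-TCP 'Read Holding Registers' (0x03) request.

    MBAP header: transaction id, protocol id (0 = Modbus), length
    (count of bytes that follow the length field), unit id.
    All fields are big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 0x11, 0x006B, 3)
# a 12-byte request, ready to send to the slave over TCP port 502
```

Because the application payload is unchanged from serial Modbus, existing device logic carries over; only the transport framing differs.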
PROFINET is Profibus’ Ethernet-based communication system, currently under development by Siemens and the Profibus User Organization (PNO).
The ongoing level of interest, activity and new product introductions of Ethernet-based equipment suggests industrial use of Ethernet will continue to grow for the foreseeable future.



Author : Mike Fahrion
Source: www.eetimes.com

Thursday, July 26, 2012

Electronics markets showing signs of recovery


Electronics markets bounced back strongly in 2010 from the 2008-2009 recession. The recovery stalled in 2011 as a series of natural and human-made disasters hit various parts of the world. Japan was hit by an earthquake and tsunami in March 2011. Thailand was affected by floods, which disrupted HDD production and thus impacted PC production throughout the world. The European financial crisis led to economic weakness in most of the European Union (EU).

Recent government data on electronics production and orders shows most regions are beginning to recover from the 2011 slowdown. The chart below shows the three-month-average change versus a year ago, in local currency, for electronics orders (U.S. and EU) and production (China and Japan). The data is through May 2012, except for the EU, which is through April. China continues to show the most robust growth, with double-digit growth since the beginning of 2010. U.S. electronics orders experienced 12 months of year-to-year declines from March 2011 through January 2012, but turned positive in February 2012, reaching 6% growth in May. Japan electronics production change was negative for 16 months, with declines greater than 20% for six of those months. In May 2012, Japan electronics production turned positive at 2.3%. EU electronics orders remain weak, with April 2012 the twelfth consecutive month of year-to-year declines.



Unit shipments of PCs, mobile phones and LCD TVs have not yet reflected the above recovery. International Data Corporation (IDC) estimates PC unit shipments versus a year ago have been flat or in the low single-digit range for seven quarters through 2Q 2012. A bright spot has been media tablets, which have shown explosive growth since Apple released its iPad in 2Q 2010. Numerous competitors to the iPad have emerged from Samsung, Amazon and others. Media tablets in many cases are displacing PC sales. Combining unit shipments of PCs and media tablets results in a more realistic picture of this market segment. The combined data shows sturdy growth in the double-digit range through 1Q 2012.


Mobile phone shipments were fairly sound, with double-digit growth through the first three quarters of 2011. Shipments have weakened since, with 1Q 2012 down 1.5% from a year ago. Within the mobile phone market, smart phones have shown vigorous growth of over 40% for each of the last three quarters. IDC estimated smart phones accounted for 36% of the total mobile phone market in 1Q 2012, double the percentage from two years earlier. Smart phones are a key driver for the semiconductor market due to their high semiconductor content. Smart phones also benefit mobile service providers through increased data usage, and provide growth opportunities for app developers and accessory makers.

LCD TV unit shipments have also been weakening over the last two years, according to Display Search. 1Q 2012 LCD TV units were down 3% from a year ago compared to year-to-year growth averaging over 30% in 2010 and 7% in 2011. 3D LCD TVs are emerging as a meaningful segment of the market, accounting for 14% of LCD TV shipments in 1Q 2012 versus just 4% in 1Q 2011.

Despite the overall weakness in electronics and semiconductor markets, some signs are pointing towards a resumption of healthy growth. Two of the key drivers of the new recovery, media tablets and smart phones, have just emerged as significant markets in the last few years. 3D TVs may drive growth in an overall flat TV market. Economists generally expect the overall world economy will show higher growth in 2013 than in 2012. An improving economy and the new electronics market drivers should lead to relatively strong electronics and semiconductor markets in 2013.

Author: Bill Jewel
Source: http://www.semiwiki.com/

Thursday, July 19, 2012

Advanced switches customize, optimize automotive HMIs

Author: Owen Camden, Source: www.eetimes.com

User interface is an extremely important aspect of any vehicle “experience.” Obviously, electromechanical components, such as switches, in automotive applications must provide high reliability and long operating life. But these switches must also add to the look (design) and feel (functionality) of a vehicle’s keyfob, center console, steering wheel and other in-cabin interfaces.

Known in the industry as “haptics,” the touch, feel, and sound of a switch’s actuation are paramount to a user’s experience and impression of a particular vehicle. Automotive design engineers want user interfaces to respond and provide feedback in specific ways that are unique to their vehicle.

Essentially, user feedback is part of the identity of a vehicle and its manufacturer. Haptic features are often customized for specific vehicles and manufacturers, and are often accomplished through advanced switch configurations. These switches must combine ruggedness with design flexibility to meet automotive requirements.

Sealing




A commonality of every harsh environment application is the need for ruggedized components that can withstand brutal environmental conditions. Switches sealed to IP40, IP65, IP67 and IP68 specifications, resistant to contamination by dust, dirt, salt, and water, are necessary in automotive applications.
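The IP codes cited above follow IEC 60529: the first digit grades solid-particle protection and the second liquid ingress. A small lookup sketch (descriptions abbreviated from the standard; the function name is ours):

```python
# Abbreviated IEC 60529 meanings for the digits used in the ratings above.
SOLID = {4: "objects >= 1 mm", 5: "dust-protected", 6: "dust-tight"}
LIQUID = {0: "no water protection", 5: "water jets",
          7: "temporary immersion (<= 1 m)", 8: "continuous immersion (> 1 m)"}

def decode_ip(code):
    """Split an 'IPxy' rating into its solid- and liquid-ingress meanings."""
    solid_digit, liquid_digit = int(code[2]), int(code[3])
    return SOLID.get(solid_digit, "?"), LIQUID.get(liquid_digit, "?")

# decode_ip("IP67") -> ("dust-tight", "temporary immersion (<= 1 m)")
```

Reading the digits this way makes the jump from IP40 (dust entry tolerated, no water protection) to IP68 (dust-tight, continuously submersible) concrete.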



Combating environmental elements is often accomplished with switch designs featuring an internal seal to protect the switching mechanism, as well as an external panel seal designed to keep liquids from entering the panel or enclosure. Silicone rubber caps are also employed to prevent the ingress of fluids that could adversely affect the function of standard switches. Additionally, the materials used in these switches, such as PBT (polybutylene terephthalate), must be robust under exposure to chemicals, water, and other potential contaminants.

Rugged switch designs for automotive vehicles also need to be resistant to extreme temperatures, vibration, and shock. Depending on the specific application within or outside the vehicle, advanced snap-acting, tactile, pushbutton, and toggle switches are available with IP67/IP68 ratings that provide high levels of reliability and predictability.

Reliability and predictable behavior are directly linked to the switch's compatibility with the mechanical environment of the unit and the final OEM specification. Switch manufacturers must simulate the potential environmental conditions a switch will encounter over its operating life to ensure their design achieves the specified robustness.

Haptics

As the gateway to a user interface, a switch configuration must have a desirable appearance, feel, and sound. Switch manufacturers must work closely with automotive makers to optimize aesthetics, ergonomics, and performance.

For typical switching functions in automotive applications, such as front panels and dashboards, robust pushbutton switches are often utilized. Sealed up to IP68 ratings, pushbutton switches can be designed to provide excellent tactile feedback with specific travel and actuation forces. Pushbutton switches also provide a long operating life of more than one million cycles, making them well-suited for use in automotive vehicles.

Ultra-miniature tactile switches are ideal for keyfobs, as they combine a high-operating life with long-term reliability and low current compatibility, while a snap-switch would most likely be used in a vehicle’s fob reader. Both switches can be customized to provide precise feedback.

The audible response of a switch is another important way for vehicle manufacturers to differentiate their vehicles. This feature can be customized to deliver a specified sound or click when actuated. Automotive OEMs are focusing more on these sound responses and consider the acoustic response as part of their branding. The audible response is highly dependent on the switch design, and vehicle manufacturers need to get the same sound response for all units, regardless of the advanced switch configuration.

In order to provide solutions that deliver repeatable audible responses, switch manufacturers have developed established platforms that allow designs to carry over to different applications. These switches provide high-reliability and consistent haptics in an “automotive proven” series, while also achieving drastically reduced development cost and time.

Implementing new technologies
Consumers are by now familiar with touch screens in their smart phones, vehicle infotainment systems, navigation/GPS systems, and even computer monitors. Similarly, touch sensing controls, including tactile screens, can now be implemented into almost any electronic device, including center console interfaces. The “clicker” function (see below) can be enabled with any touch-sensitive surface, and provides uniform haptics over any surface. The simple mechanical system integrates low-profile tactile switches with the actuating superstructure to provide a clickable touch sensing system.


Based on a structure with supporting points, the actuator collects and transmits the force from the touching surface to the switch with a minimum of distortion or power consumption. Pressure on any surface applies a force on the supporting points at the edge, which in turn transmit the force to the middle arms and then to the appropriate tactile switch. The technology is more efficient than “hinged” solutions, providing a smooth and uniform click. Multiple configurations, structures and profiles can be developed depending on the application and the room available for integration, from 2.5 to 10 mm.

In addition, the clicker technology can be combined with multiple key areas based on the same actuating surface. The keys/buttons are managed via touch sensing in that each area is used for pre-selection of the function. Selecting the function is managed by actuation of the whole surface. The main benefit is that instead of managing separate keys/buttons, the unit can be managed by a single flush surface with haptics, reducing integration issues and simplifying features like key alignment, backlighting, and tactile differences between keys.

Assembly

In terms of assembly and maintenance, electromechanical components for automotive applications often feature quick connections that support modular installation so that they can be plugged into the panel and locked firmly into place, simplifying both assembly and maintenance. This arrangement also lets the OEM store individual switch elements separately and configure them during final assembly to meet application-specific needs, while saving storage space and costs.

One such example of this is seen with illumination, as automotive vehicles contain a large number of visual indicators for informational and safety purposes, signifying the current state or function of a vehicle's features. LED indicators can be mounted independently of the switch and interfaced with an IC that controls them for slow or fast blinking, or to produce various colors that indicate status.

Adding illumination at the switch level significantly reduces materials costs, as some switch designs with optional illumination allow users to order the base switch with or without the cap, providing the ability to order one base switch together with multiple color caps, or snap-on caps, during the installation process to suit each specific application. This modular solution not only allows OEMs to have fewer part numbers to inventory, thus simplifying materials and assembly processes, but it also gives engineers greater flexibility with circuit and panel designs.

Manufacturers have also developed pushbutton switches that offer panel-mounting options to help simplify assembly. Rear panel-mounting switches, such as C&K’s rugged pushbutton switches, are ideal for applications where multiple switches are going to be mounted to a PCB. Customers can mount them to the PCB first, and then the entire assembly can be installed into the armrest or panel. This allows for a much easier installation than traditional front-mounted switches, which require the mounting hardware to be tightened from behind the panel (often hard to access) and all the leads to be run to the mating connectors on the PCB. Rear-mounted switches allow customers to install the hardware from the top and still achieve the same look while simplifying the installation process.

Custom electromechanical solutions
OEMs are approaching electromechanical component manufacturers for more than just switches. By utilizing a switch manufacturer for designs beyond the switch itself, increased flexibility in overall design can be realized. Switch dimensions are becoming more critical, making it imperative to work closely with customers to discern all details. Today, it is less about the switch being mounted to a PC board or adding wire leads or a connector to the switch, and more about defining the issues that need to be solved. Because switch manufacturers are now dealing with the entire module, they are spending an increasing amount of time with customers to determine how the module is being impacted in the application to assess potential challenges that were not previously considered.

By working closely with customers in all phases of the design process, switch manufacturers can ensure that the materials that interface with the operator, as well as those in the actual contact mechanism, are re-evaluated and altered to conform to performance, reliability, lifespan, and robustness standards. For example, some manufacturers are now offering switch packages with multi-switching capability and high over-travel performance, with various core switching technologies such as opposing tactile switches or a dome array on a PC board. Such packages often include additional PC board-mounted integrated electronics, custom circuitry, and industry-standard connectors for the complete package.

Still other solutions optimized for automobiles, such as interior headliners, feature not only customized switches with insert molded housing and custom circuitry and termination, but also paint and laser etched switch button graphics and decoration capabilities, as well as backlighting for nighttime use. Today, switch manufacturers are doing even these customized graphics and decorations in-house.

Conclusion
Although the electromechanical components are some of the last devices designed or specified into a center console, dashboard, or steering wheel, switches are one of the most important components in a system—and one of the first components a vehicle operator contacts. Each automotive manufacturer has different performance requirements, external presentations, internal spacing, and footprint restrictions. As such, each vehicle requires different packaging and orientation of switches to achieve the functional and performance objectives. Customized haptic options (actuation travel, force, and audible sound), ergonomic options (illumination, decoration, appearance), sealing options, circuitry configurations, housing styles, mounting styles (including threaded or snap-in mounting), and termination options are integral to meeting these application requirements.

Combining innovative electromechanical designs with complete switch assemblies often affords customers greater flexibility and customization options that reduce assembly and manufacturing costs while improving performance and reliability. Working closely with the switch manufacturer to solve design problems and implement customized performance requirements can yield faster time-to-market by streamlining the prototyping and production ramp-up stages, thereby bringing the finished product to market more quickly.

Thursday, July 12, 2012

Integrating IDE and RTOS for ARM-based development


Author: John Carbone, Express Logic
Source: www.eetimes.com
Embedded systems employing ARM 32-bit processors support application software that can deliver excellent product functionality and benefits. Faced with the challenge of developing such applications, developers seek robust, powerful development tools to help them bring complex, well-optimized products to market faster than the competition. Rather than settling for "free" or "cheap" tools, commercial developers recognize the benefits of paying more for quality tools and achieving quality results. In this rapidly evolving and competitive market, only the strongest will survive. Manufacturers are well advised to consider the ROI from the best tools available, rather than seek the lowest-cost tools that might leave their product with a competitive handicap in time-to-market and performance.

Developers today need an integrated development environment (IDE) and a real-time operating system (RTOS) to help them develop and achieve optimal performance for their application. But not only do developers need the best IDE and RTOS they can find, they need them to operate together to achieve maximum benefits. Each must be of high quality in its own right, but beyond that, they must be engineered to work together and to support each other's strengths. The steps involved in the development of an embedded application are similar across the spectrum from small to large. A simple example can therefore be used to illustrate how the capabilities of an integrated IDE and RTOS make real product development easier and more effective. The host-target configuration might look something like this:

Developing an embedded application involves many steps, including:
Design the application
Write the source code
Compile the code
Link the libraries
Download the executable to a target board
Run the program on the target
Set breakpoints
Stop the program
Analyze data from the running application

Each of these steps can be handled by an IDE, with its component editor, compiler, linker, and debugger. Especially in more complex applications, the developer might elect to use an RTOS to schedule application threads, pass messages, and use semaphores and mutexes. The RTOS also should be able to capture event trace information in real time, and enable those events to be viewed graphically on the host. This is critical in a real-time system, since so much happens so quickly that a “big picture” view helps sort things out. An event trace display shows exactly what RTOS services and application-defined events each thread has performed. It also can collect and summarize performance statistics, including a complete execution profile, a summary and individual display of all context switches, and a wealth of other useful information.

An example application

To illustrate how these RTOS/IDE capabilities can work together to help the developer, consider a simple application. The application consists of eight threads, "thread_0" through "thread_7." Each thread performs a particular function in an endless loop. The threads are each assigned a priority, from 0 (the highest) to 1024 (the lowest). At any point in time, the RTOS runs the highest-priority thread that is “READY” to run. A thread is READY when it is not waiting for anything outside of its own code – for example, waiting for a message to arrive from another thread. In such instances of waiting, a thread is "SUSPENDED" and is not READY to run. Here is a summary of what each thread does:

The RTOS

A priority-based, preemptive scheduling RTOS, ThreadX, is used in this demo to manage the eight application threads. Using this example, developers can also explore several services – including messaging, semaphores, mutexes, and sleep – and experiment with them in code of their own. To follow the application's flow and specific actions, the RTOS time-stamps each event and stores related information in a circular buffer on the target. This buffer is uploaded to the host and displayed there to reveal the time-based flow of events.

The IDE

IAR Embedded Workbench for ARM is the development IDE for this demo. This IDE has been tightly integrated with the RTOS, and four areas of integration are noteworthy:

Project Compatibility. The RTOS is provided in a format that enables it to be used from within the IDE's project structure. All source code is syntax compatible with the IDE's compilers and assemblers, making the RTOS ready to use right out of the box.

RTOS Awareness. This IDE plug-in captures and displays a wealth of information for all application threads and other defined objects each time the system is stopped (at a breakpoint or a HALT command). One such display, shown above, is the “Execution Profile.” This is a unique information capture and display that shows profiling information for the application's threads, interrupt system, and idle time, with a separate breakdown by thread. Note the overall application profile, showing the breakdown of execution time by Idle, Interrupt, and Application, as well as the individual thread performance profiles. As you can see, threads 1 and 2 indeed dominate performance, as described earlier.

Macro Operations. Using the IDE's macro capability, the IDE can automatically and optionally upload the trace buffer to the host at a breakpoint. This simplifies and speeds up the process of running the program and then viewing the event trace to see what happened.

Launch an External Tool. Another convenient capability enables the launch of an external tool through the “Tools” menu. This can be used to quickly and easily launch a software systems event analyzer from the IDE.
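The scheduling rule the RTOS applies – always run the highest-priority READY thread – can be sketched in a few lines of Python. This is an illustrative model only, not the RTOS's actual implementation; the thread names and priorities below are invented:

```python
READY, SUSPENDED = "READY", "SUSPENDED"

def next_thread(threads):
    """Pick the thread to run: the highest-priority READY thread.

    `threads` maps a thread name to (priority, state), where a *lower*
    priority number means higher priority (0 is the highest).
    """
    ready = [(prio, name) for name, (prio, state) in threads.items()
             if state == READY]
    if not ready:
        return None  # nothing runnable: the RTOS falls into its idle loop
    return min(ready)[1]

threads = {
    "thread_0": (1, SUSPENDED),  # sleeping between its periodic activations
    "thread_1": (4, READY),
    "thread_2": (4, READY),
    "thread_5": (8, READY),
}
# next_thread(threads) -> "thread_1": it outranks thread_5, and among
# equal priorities the scheduler picks one ready thread to run first
```

When a SUSPENDED thread's wait completes (say, a message arrives), it becomes READY, and if it outranks the running thread the preemptive scheduler switches to it immediately.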

The Application Build

Normally, to build an application, the developer would have to go through the following steps:
Create a new project
Add a source file to the project
Type in the source code
Build the RTOS library
Add the necessary board initialization files
Set compiler, linker, library, and other configuration settings
Build the application

In an IDE with tight RTOS integration, the task becomes much simpler and less error-prone. The RTOS is provided with a sample application, which includes a pre-built RTOS library, application source code, header files, initialization code, and linker directives. It is intended for debugging, with all profiling and trace operations activated, and minimal optimization. This enables the developer to immediately scan the demo code, study the project settings, and make changes only where desired.

Breakpoints and Macros

Before running the application, the developer might set a breakpoint in the code for "thread_0." Since this thread runs every 10 ms, it provides a predictable point at which to examine the events that have transpired over that period of time. Each time this breakpoint is reached, the value of thread_0_counter increments by one, as shown in the debugger's Watch Window, where expressions can be viewed. In addition, the trace buffer can be uploaded automatically from target memory to the host, which is helpful in case the developer decides to view the events. This is where the IDE's macro capability comes in: a macro can be performed by the debugger by listing it in the breakpoint edit window, as shown below. This is a powerful capability with many uses. In addition to executing a macro, developers can instruct the program to "continue" after encountering the breakpoint or remain stopped at it. The debugger can also be told not to stop for the first "n" occurrences of the breakpoint, and to stop on the "n+1"th.
Also, a Boolean expression or condition can be evaluated and used to determine whether to stop at the breakpoint. These features give the developer tools to help examine what is happening at a particular point in the program.

RTOS Awareness

Many IDEs offer RTOS awareness, but not all RTOS awareness implementations are the same. In addition to tracking all application threads and kernel objects (semaphores, mutexes, queues, etc.), IAR Embedded Workbench for ARM also displays the RTOS execution profiling information. In our example, it provides a view of all threads. This is just one of several views available; note the tabs for other views.

Event Trace

An event trace can be a most useful tool for understanding the behavior of a real-time system. While stopped at a breakpoint, we can easily view the events with a single mouse click, thanks to another IDE capability – the ability to launch an external tool. In this case, the external tool we launch is TraceX, which runs on the host and provides a timescale view of system events; it can also be used to measure the time between events. This provides a valuable measurement tool for determining the time taken to progress from one point of a program to another. As shown below, the highlighted span is 172.5 microseconds. By highlighting the start and stop events of interest, developers have a handy means of examining not just what was done, but how long it took.
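That span measurement is just a subtraction over time-stamped trace records. A hypothetical Python sketch, with invented event names and timestamps chosen to reproduce a 172.5 µs span:

```python
def elapsed_us(trace, start_event, stop_event):
    """Microseconds between the first `start_event` and the next
    `stop_event` in a chronologically ordered trace of
    (timestamp_us, thread, event) tuples. Returns None if not found."""
    start_ts = None
    for ts, _thread, event in trace:
        if start_ts is None and event == start_event:
            start_ts = ts
        elif start_ts is not None and event == stop_event:
            return ts - start_ts
    return None

trace = [(100.0, "thread_0", "queue_send"),
         (150.0, "thread_1", "queue_receive"),
         (272.5, "thread_1", "semaphore_put")]
span = elapsed_us(trace, "queue_send", "semaphore_put")  # -> 172.5
```

A graphical tool does the same arithmetic interactively: highlighting two events on the timescale selects the start and stop timestamps and displays their difference.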

Summary: An IDE that is tightly integrated with an RTOS can simplify many of the otherwise tedious operations that a developer must perform in order to run and debug an application. Beyond assuring correctness, this information enables developers to optimize the performance of the application, and to make tradeoffs between responsiveness and throughput, which are often opposing goals.

Verification claims 70% of the chip design schedule!

Human psychology suggests that constant repetition of any statement registers it in the subconscious mind, and we start believing it. The statement "Verification claims 70% of the schedule" has been floating around in articles, keynotes and discussions for almost two decades – so much so that, even in the absence of any data validating it, we have believed it as fact for a long time. However, progress in the verification domain indicates that this number might actually be a "FACT".

Twenty years back, designs were a few thousand gates and the design team verified the RTL themselves. Test benches and tests were all developed in HDLs, and a sophisticated verification environment was not even part of the discussion. It was assumed that verification accounted for roughly 50% of the effort.

Since then, design complexity has grown exponentially, and state-of-the-art test benches rich in metrics have displaced legacy verification. Instead of designers, a team of verification engineers is deployed on each project to handle the cumbersome task. Verification still continues to be an endless task, demanding aggressive adoption of new techniques quite frequently.

A quick glance at the task list of a verification team shows the following items:
- Development of metric driven verification plan based on the specifications.
- Development of HVL+ Methodology based constrained random test benches.
- Development of directed test benches for verifying processor integration in SoC.
- Power aware simulations.
- Analog mixed signal simulations.
- Debugging failures and regressing the design.
- Add tests to meet coverage goals (code, functional & assertions).
- Formal verification.
- Emulation/Hardware acceleration to speed up the turnaround time.
- Performance testing and use cases.
- Gate level simulations with different corners.
- Test vector development for post silicon validation.
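One of the items above, meeting functional coverage goals, boils down to bookkeeping over bins. A toy Python model makes the idea concrete; real HVL covergroups (e.g. in SystemVerilog) are far richer, and the bin names here are invented:

```python
class Covergroup:
    """Toy functional-coverage model: named bins that stimulus can hit.

    Coverage closure means driving enough (often constrained-random)
    stimulus that every bin has been exercised at least once.
    """
    def __init__(self, bins):
        self.hits = {b: 0 for b in bins}

    def sample(self, value):
        """Record one observation; values outside the bins are ignored."""
        if value in self.hits:
            self.hits[value] += 1

    def coverage(self):
        """Percentage of bins hit at least once."""
        covered = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * covered / len(self.hits)

cg = Covergroup(["READ", "WRITE", "BURST_READ", "BURST_WRITE"])
for op in ["READ", "WRITE", "READ"]:
    cg.sample(op)
# cg.coverage() -> 50.0: two of four bins hit, so more tests are needed
```

The "add tests to meet coverage goals" task is exactly the loop of inspecting which bins remain at zero and writing or constraining stimulus to reach them.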

The above list doesn't include modeling for virtual platforms, as it is still in the early-adopter stage. Along with the verification team, significant cycles are added by the design team for debugging. If we try to quantify the CPU cycles required for verification on any project, the figures would easily overshadow any other task in the ASIC design cycle.

Excerpts from the Wilson Research study (commissioned by Mentor) indicate interesting (approximate) data:
- The industry adoption of code coverage increased to 72 percent by 2010.
- The industry adoption of assertions increased to 72 percent by 2010.
- Functional coverage adoption grew from 40% to 72% from 2007 to 2010.
- Constrained-random simulation techniques grew from 41% in 2007 to 69% in 2010.
- The industry adoption of formal property checking has increased by 53% from 2007 to 2010.
- Adoption of HW assisted acceleration/emulation increased by 75% from 2007 to 2010.
- Mean time a designer spends in verification has increased from an average of 46% in 2007 to 50% in 2010.
- Average verification team size grew by a whopping 58% during this period.
- 52% of chip failures were still due to functional problems.
- 66% of projects continue to be behind schedule. 45% of chips require two silicon passes and 25% require more than two passes.

While the biggies of the EDA industry evolve their tools incessantly, a brigade of startups has surfaced, each trying to tackle the exorbitant problem of verification. The solutions attack the problem from multiple perspectives: some try to shorten the regression cycle, some move tasks from engineers to tools, some provide data mining, while others provide guidance to reduce the overall effort.

The semiconductor industry is continuously defining ways to control the volume of verification, not only by adding new tools and techniques but by redefining the ecosystem and collaborating at various levels. The steep rise in the usage of IPs (e.g. ARM's market valuation reaching $12 billion, and Semico reporting that the third-party IP market grew by close to 22 percent) and VIPs (read posts 1, 2, 3) is a clear indication of this.

So much has been added to the arsenal of verification teams, and to their ownership of the ASIC design cycle, that one can safely assume verification effort has moved from 50% in the early '90s to 70% now. And since the process is still ongoing, it will be interesting to see whether this magic figure of 70% persists or moves up further!

Author: Gaurav Jalan, Source: www.semiwiki.com

Sunday, July 1, 2012

Software extends hardware-in-the-loop real-time simulation




Author: Tina George, Source: www.eetimes.com



Somewhat similar to automotive development, in the space industry the design, building, and testing of planetary rover prototypes are extremely expensive, and system testing typically does not occur until late in the design/testing process, leading to long, protracted development times. In response to such timing issues, Amir Khajepour, Canada Research Chair in Mechatronic Vehicle Systems and engineering professor in the Mechanical and Mechatronics Engineering department at the University of Waterloo (Canada), and his team worked with the Canadian Space Agency (CSA) and Maplesoft to develop a hardware-in-the-loop (HIL) test platform for solar-powered planetary rovers.

The CSA has a strong history of applying symbolic techniques in space robotics modeling—having used these techniques in the design of various space robotic systems deployed on the U.S. Space Shuttle and the International Space Station. This new HIL initiative uses MapleSim, the latest generation of Maplesoft's symbolic modeling technology, to rapidly develop high fidelity, multi-domain models of the rover subsystems.
The team's approach allows component testing within a simulation loop before a full rover prototype is available, essentially creating a virtual testing environment for the component under test by “tricking” it into thinking it is operating within a full prototype. Using the MapleSim modeling and simulation tool, high fidelity and computationally efficient models were created for this real-time application.
With this test platform, scenarios that are hard to replicate in a lab setup (such as the Martian environment) or components that are not yet available, can be modeled while hardware components that are available can communicate with these software models for real-time simulations. The goal is to progressively add hardware components to the simulation loop as they become available. In this way, system testing takes place even without all the hardware components, bridging the gap between the design and testing phases.
The main advantage of this approach is that it significantly reduces the overall project development time. In addition, this method allows for component testing under dangerous scenarios without the risk of damaging a full rover prototype.
Rover kinematics
Besides simulating the rover dynamics, the MapleSim modeling environment was used to automatically generate the kinematic equations of the rover.

These equations then formed the basis for other tasks in the project such as HIL simulations, rover exploration-path planning, and power optimization. The modular system setup also enables users to quickly change the rover configuration and explore different approaches in a short time.
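The article does not reproduce the rover's kinematic equations, but the idea can be illustrated with a minimal planar skid-steer model. This is a hedged sketch only: the function name, wheel radius, and track width below are illustrative assumptions, not values from the CSA project.

```python
import math

def rover_pose_update(x, y, yaw, w_left, w_right, dt,
                      wheel_radius=0.1, track_width=0.8):
    """Advance a planar skid-steer rover pose by one timestep.

    w_left / w_right are the mean angular speeds (rad/s) of the
    left- and right-side wheels; dt is the timestep in seconds.
    """
    v = wheel_radius * (w_left + w_right) / 2.0              # forward speed (m/s)
    omega = wheel_radius * (w_right - w_left) / track_width  # yaw rate (rad/s)
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += omega * dt
    return x, y, yaw
```

With equal wheel speeds the rover drives straight; with opposite speeds it turns in place, which is the usual idealization for skid-steer vehicles. A symbolic tool like MapleSim derives far richer equations (wheel-soil contact, suspension dynamics) automatically, which is the point the article makes.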
Hardware-in-the-loop framework
The figure below shows an overview of the test platform. Information regarding the rover’s position, orientation, tilt, speed, and power consumption (obtained from dynamic models of the rover) is used as input to the software models. A library of rover components was developed within MapleSim and imported within LabView Real-Time where the HIL program and GUI of the simulations were developed. The program was then uploaded to the embedded computer within National Instruments PXI, where communication between the hardware components and the software models was established and the real-time simulation run.
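The structure of such a loop can be sketched in a few lines of Python (the real platform uses MapleSim models inside LabVIEW Real-Time on NI PXI hardware). Every name here is illustrative: a software stand-in for a not-yet-available component is driven by readings from the real components each cycle, and its outputs are pushed back to the test rig.

```python
def battery_model(charge, load_w, solar_w, dt, capacity_wh=1000.0):
    """Software stand-in for a battery that is not yet available as
    hardware: integrates net power into a state of charge in [0, 1]."""
    net_wh = (solar_w - load_w) * dt / 3600.0
    return min(1.0, max(0.0, charge + net_wh / capacity_wh))

def hil_step(read_hardware, write_hardware, charge, dt):
    """One cycle of the loop: sample the real components, run the
    simulated one, push the result back to the test rig."""
    load_w, solar_w = read_hardware()
    charge = battery_model(charge, load_w, solar_w, dt)
    write_hardware(charge)
    return charge

# Stubbed 'hardware' for demonstration: constant 50 W load, 80 W solar input.
charge = 0.5
for _ in range(3600):  # one simulated hour at 1 s steps
    charge = hil_step(lambda: (50.0, 80.0), lambda c: None, charge, 1.0)
```

In the real platform the `read_hardware`/`write_hardware` side is actual instrumented hardware, and the software model runs under hard real-time deadlines, which is why the article stresses the computational efficiency of the symbolically simplified MapleSim models.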

“Due to the multidomain nature of the system (mechanical, electrical and thermal), it was desirable to model all the components within one modeling environment such that critical relationships can be easily discovered. In addition, computational efficiency is crucial in real-time simulations,” said Khajepour. He added, “MapleSim was found to be an excellent environment for the application because of its multidomain abilities, use of symbolic simplification for higher computational efficiency, and ease of connectivity to LabVIEW.”
Development of custom components and power management
In addition to making use of MapleSim’s built-in component library, custom components were also readily developed. A model to estimate the solar radiation that a tilted surface (i.e. solar panels) would receive on Mars was implemented using MapleSim’s Custom Component Block. This model took into account the sun’s position, the rover’s latitudinal and longitudinal position as well as orientation and tilt as it traveled from point A to point B. This was used together with a solar array model (see figure below) to estimate the power generation of a rover throughout the day.
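The geometric core of such a model is the standard incidence-angle relation between the sun's position and the panel's orientation. The sketch below shows only that relation; the CSA model additionally accounts for latitude, longitude, and time of day, and the function name and arguments here are assumptions for illustration.

```python
import math

def tilted_panel_irradiance(direct_w_m2, sun_elev_deg, sun_az_deg,
                            panel_tilt_deg, panel_az_deg):
    """Direct irradiance (W/m^2) received by a tilted flat panel.

    Uses the standard incidence-angle relation:
      cos(theta) = cos(tilt)*sin(elev)
                 + sin(tilt)*cos(elev)*cos(sun_az - panel_az)
    A negative cos(theta) means the sun is behind the panel,
    so the direct contribution is clamped to zero.
    """
    elev = math.radians(sun_elev_deg)
    tilt = math.radians(panel_tilt_deg)
    daz = math.radians(sun_az_deg - panel_az_deg)
    cos_theta = (math.cos(tilt) * math.sin(elev)
                 + math.sin(tilt) * math.cos(elev) * math.cos(daz))
    return direct_w_m2 * max(0.0, cos_theta)
```

For example, a flat panel under a zenith sun receives the full direct irradiance, while any sun elevation below the horizon yields zero, matching the intuition the custom component encodes.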

“The intuitive nature of MapleSim allowed my team to create high fidelity models in a short period of time,” said Khajepour. “This played a key role in the success of this modular HIL test platform which allowed for component testing, power level estimation, as well as the validation of power management and path planning algorithms.”
The team also used MapleSim as a tool in an earlier part of the project to develop the power management system of the autonomous rovers. They used the software to rapidly develop high-fidelity, multidomain models of the rover subsystems. The goal was to develop a path planning algorithm that took rover power demands (and generation) into account. Using the models developed, the path planner determined the optimum path between point A and point B, such that the rover maintained the highest level of internal energy storage—while avoiding obstacles and high risk sections of the terrain.
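An energy-aware planner of the kind described can be reduced, in toy form, to a shortest-path search over a grid whose edge weights encode net energy spent per cell. The cost model and grid below are invented for illustration; the actual CSA planner works on continuous terrain with power generation, obstacles, and risk terms.

```python
import heapq

def energy_aware_path(cost, start, goal):
    """Dijkstra over a grid; cost[r][c] is the net energy spent crossing
    a cell, and None marks an obstacle.  Returns the cheapest path as a
    list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:                      # reconstruct path back to start
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):  # stale heap entry
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None
```

Minimizing energy spent is equivalent, in this simplified setting, to the article's goal of keeping the rover's stored energy as high as possible on arrival; power generation along the route would appear as negative cost contributions.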
Step one of this three-year project was to develop the initial rover model, including such aspects as battery, solar power-generation, and terrain and soil conditions. Including a full range of HIL testing phases with real-time hardware and software using system models was critical for optimizing system parameters that maximized power conservation while still achieving mission goals.
“With the use of MapleSim, the base model of the rover was developed in a month,” says Khajepour. “We now have the mathematical model of the 6-wheeled rover without writing down a single equation. MapleSim was able to generate an optimum set of equations for the rover system automatically, which is essential in the optimization phase.” The symbolic techniques that lie at the heart of the software generate efficient system equations, without loss of fidelity—thus eliminating the need to simplify the model manually to reduce its computational complexity.
GUI
Khajepour also noted the graphical interface. In MapleSim, a design engineer can readily re-create the system diagram on his/her screen using components that represent the physical model. The resulting system diagram looks very similar to what an engineer might draw by hand. MapleSim can then easily transform the models into realistic animations. These animations make it substantially easier to validate the system diagram and give greater insight into the system behavior.
“The ability to see the model, to see the moving parts, is very important to a model developer,” says Khajepour. “I am now moving to MapleSim in most of my projects.”

Sunday, June 24, 2012

Standardisation as a key to growth for Embedded systems

Source: automotiveIT international, Author: Mr. Arjen Bongard


Embedded systems in the car will grow, and automakers and suppliers see standardization as key to that growth, according to a German survey of decision makers in the auto and aerospace industries.
They also believe hardware and software can in future come from different companies and they see relatively good potential for service providers to play an important role in the growth of embedded systems.

The survey was carried out by the forsa market research agency for the F.A.Z. Institute and automotive electronics service provider ESG. The researchers polled 100 decision makers in the German auto and aerospace industries about their views on the growth of embedded systems.

“The view that software and hardware can be separated was mildly surprising,” said Jacqueline Preusser, the research analyst who coordinated the project at the F.A.Z. Institute. “Those two sides are still connected at the moment.” According to the survey, 86 pc of automotive executives polled felt that infotainment hardware and software could come from different providers in future. For visualization components, 77 pc felt this way, while the percentage stood at 74 pc for driver assistance systems. Auto executives were less convinced of this trend for embedded systems supporting powertrain and engine electronics.

Growth seen
Most executives expected the market for embedded systems to grow moderately or strongly. Embedded systems manage a specific function in the car through a combination of hardware and software that includes mechanics, sensors and various electronic components.

Hans Georg Frischkorn, the head of ESG’s automotive operations, said the poll underscored how important embedded systems are for future innovations in the car. In a press briefing at ESG headquarters, he particularly cited the growth in networked functions. “It will be very exciting to see the smart car, the smart grid and the smart home all networked together,” he said.

Almost all executives interviewed said networked functions would grow most in importance among R&D trends for the next five years. Safety and security, and miniaturization, were ranked second and third, respectively. Frischkorn said the connected car will mean increasing complexity in electronic systems. “Standardization is necessary to manage that complexity,” he said.
In the auto industry, 36 pc of executives polled considered standardization the biggest challenge for the next five years. User friendliness came second with 14 pc, and compatibility of hardware and software was third with 12 pc.

System integration
With embedded systems increasing in cars, 95 of the 100 executives polled agreed that system integration was becoming more important. In the auto industry, most – 56 pc – agreed that automakers will continue to manage this challenge. But 42 pc also saw a growing role for engineering service providers.
ESG’s Frischkorn said that, given the multitude of tasks a carmaker has to manage, engineering service providers will play a role in system integration. Said the ESG executive: “It’s not an either-or question.”