Thursday, October 18, 2012

FPGAs vs. ASICs


Deciding between ASICs and FPGAs requires designers to answer tough questions concerning costs, tool availability and effectiveness, as well as how best to present the information to management to guarantee support throughout the design process.
The first step is to make a block diagram of what you want to integrate. Sometimes it pays to bring in an experienced field applications engineer. Remember that time is money. Your next move is to come up with some idea of production volume. Next, make a list of design objectives in order of importance. These could include cost (including nonrecurring engineering, or NRE, charges), die size, time-to-market, tools, performance and intellectual property requirements. You should also take into account your own design skills, what you have time to do and what you should farm out. Remember that the project must make sense financially or you are doomed from the start.
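To make that prioritization step concrete, here is a minimal Python sketch of a weighted scoring matrix. Every objective, weight and 1-to-5 score below is a made-up placeholder for illustration, not data from a real project.

```python
# Hypothetical weighted scoring of ASIC vs. FPGA against ranked
# design objectives. Weights and 1-5 scores are made-up examples;
# substitute your own before drawing any conclusions.

objectives = {
    # name: (weight, asic_score, fpga_score)
    "unit cost at volume": (0.30, 5, 2),
    "NRE / upfront cost":  (0.25, 1, 5),
    "time-to-market":      (0.20, 2, 5),
    "performance":         (0.15, 5, 3),
    "design-change risk":  (0.10, 1, 5),
}

def total(option_index):
    """Weighted sum for one option (0 = ASIC, 1 = FPGA)."""
    return sum(w * scores[option_index]
               for w, *scores in objectives.values())

print(f"ASIC score: {total(0):.2f}")  # 3.00 with these placeholders
print(f"FPGA score: {total(1):.2f}")  # 3.80 with these placeholders
```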
Time-to-market is often at the top of the list. Some large ASICs can take a year or more to design. A good way to shorten development time is to make prototypes using FPGAs and then switch to an ASIC. But the most common mistake that designers make when they decide to build an ASIC is that they never formally pitch their idea to management. Then, after working on it for a week, the project is shot down for time-to-market or cost reasons. Designers should never overlook the important step of making their case to their managers.
Before starting on an ASIC, ask yourself or your management team if it is wise to spend $250,000 or more on NRE charges. If the answer is yes and you get the green light, then go. If the answer is no, you'll need to gather more information before taking the ASIC route. Understand that most bean counters see little value in handing someone $250,000 as a one-time charge; they would rather see the cost folded into the production units.
Say your project has an NRE of $300,000, an annual volume of 5,000 units, and the ASIC replaces circuitry that costs $80. The final ASIC cost is $40, so each unit saves $40, or $200,000 per year. Do the math and the break-even point works out to 18 months. If you amortize the same design over five years, this could save your company $700,000 even after the NRE has been absorbed.
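The arithmetic is worth sanity-checking. Here is a minimal Python sketch using the numbers above, assuming the 5,000-unit volume is per year:

```python
# Break-even check for the example above: $300k NRE, 5,000 units/year,
# $80 of replaced circuitry versus a $40 ASIC (annual volume assumed).

nre = 300_000            # one-time engineering charge ($)
units_per_year = 5_000
old_cost = 80            # cost of the circuitry being replaced ($/unit)
asic_cost = 40           # ASIC unit cost ($/unit)

annual_savings = units_per_year * (old_cost - asic_cost)  # $200,000/year
break_even_years = nre / annual_savings                   # 1.5 years

for years in (break_even_years, 5):
    net = years * annual_savings - nre
    print(f"After {years:.1f} years: net savings ${net:,.0f}")
    # -> $0 at 1.5 years, $700,000 at 5 years
```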
Another option is a "rapid ASIC" built from preformed ASIC blocks, which saves time and lowers NRE costs. It can also make sense to convert an FPGA design directly to an ASIC, which trims NRE a bit further than the rapid-ASIC approach.
Now let's say your company will not fund an ASIC effort. That means it's time to consider FPGAs. First, be aware that while the tools are free on the Web for the smaller FPGAs, you'll have to pay for a license file for the ones with high gate counts. The good news is that there are no NRE charges.
Modern FPGAs are packed with features that were not previously available. Today's FPGAs usually come with phase-locked loops, low-voltage differential signaling (LVDS), clock-data recovery, more internal routing, high speed (most tools measure timing in picoseconds), hardware multipliers for DSP, memory, programmable I/O, IP cores and microprocessor cores. You can integrate all your digital functions into one part and truly have a system on a chip. When you look at all these features, it can be tough to argue for an ASIC.
Moreover, an FPGA can be reprogrammed in a snap, while an ASIC respin can take $50,000 and six weeks to make the same changes. FPGA prices range from a couple of dollars to several hundred or more, depending on the features listed above.
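One more useful calculation before choosing is the crossover volume where the ASIC's NRE pays for itself against a pricier FPGA. The unit prices in this sketch are purely illustrative assumptions:

```python
# Crossover volume between an FPGA (no NRE, higher unit price) and an
# ASIC (large NRE, lower unit price). All prices here are assumptions.

fpga_unit = 25.0        # assumed FPGA price ($/unit)
asic_unit = 5.0         # assumed ASIC unit price ($/unit)
asic_nre = 250_000.0    # assumed one-time charge ($)

# Total costs are equal when asic_nre + asic_unit * v == fpga_unit * v.
crossover = asic_nre / (fpga_unit - asic_unit)
print(f"ASIC becomes cheaper above {crossover:,.0f} units")  # 12,500
```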
So before you get moving, make sure to enlist some help, get the managers to support you, come up with a meaningful cost estimate, choose the right weapon -- be it ASIC or FPGA -- and then move into production.

Author : Jeff Kriegbaum
Source : http://www.design-reuse.com/articles/9010/fpga-s-vs-asic-s.html







Monday, October 8, 2012

Is DDR4 a bridge too far?


We’ve gone through two decades where the PC market made the rules for technology. The industry faces a question now: Can a new technology go mainstream without the PC?

By now, you’ve certainly read the news from Cadence on their DDR4 IP for TSMC 28nm. They are claiming a PHY implementation that exceeds the data rates specified for DDR4-2400, which means things are blazing fast. What’s not talked about much is how the point-to-point interconnect needed for large memory spaces is going to be handled.
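
For scale, the raw arithmetic behind those rates is simple: peak bandwidth is transfers per second times bus width. A quick Python sketch, assuming the standard 64-bit PC DDR channel (and a 32-bit channel for LPDDR, typical in phones):

```python
# Peak theoretical bandwidth = transfers/s * bus width in bytes.
# Bus widths are assumptions: 64-bit is the standard PC DDR channel,
# 32-bit is typical for LPDDR in mobile devices.

def peak_bandwidth_gbps(mt_per_s, bus_bits=64):
    """Peak bandwidth in GB/s from a transfer rate in MT/s."""
    return mt_per_s * (bus_bits / 8) / 1000

print(peak_bandwidth_gbps(2400))               # DDR4-2400,   64-bit -> 19.2 GB/s
print(peak_bandwidth_gbps(1600, bus_bits=32))  # LPDDR3-1600, 32-bit ->  6.4 GB/s
```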

Barring some earth-shattering announcement at IDF, Intel is way far away from DDR4. The nearest thing on their roadmap is Haswell-EX, a server platform for 2014. (Writing this when IDF is just getting underway is tempting fate, kind of like washing my car and then having it immediately rain.) AMD has been massively silent on the subject of DDR4 fitting into their processor roadmap.

Meanwhile, both Samsung and Micron are ramping up 30nm production of DDR4, and Samsung is publicly urging Intel to get moving. Both memory suppliers are slightly ahead of the curve, since the DDR4 spec isn’t official just yet. However, JEDEC has scheduled the promised DDR4 workshop for October 30, something they said would approximately coincide with the formal release of the specification. (In other words, it’s ready.)

We also have to factor in that LPDDR3 just hit the ground as a released specification this May, and memory chips implementing it won’t reach the pricing sweet spot for another year. Most phone manufacturers are still using LPDDR2 for that reason. (Again, iPhone 5 announcement this week, rain on my post forecasted.) Tablet makers are just starting to pick up LPDDR3, amid talk that the first implementations already need more bandwidth.

So, why the push for DDR4, especially in TSMC 28nm? DDR4 is obviously the answer to much higher memory bandwidth for cloud computing and the like. I’m sure there are other drivers out there, but that one was easy to find.

Interest in DDR4 has to be coming from somewhere in the ARM server camp, otherwise Cadence and TSMC wouldn’t be spending time on it. In spite of the power advances, DDR4 is nowhere near low-power enough to show up in a phone, and there’s no sign of an LPDDR4 specification yet. ARM 64-bit server implementations are just getting rolling, and Applied Micro’s X-Gene has sampled – with DDR3.

The volume driver for DDR4 – if it’s not PCs – is in question. The natural progression of speed that the PC market has pushed for looks like it is about to run smack into the economics of affordable implementations, and that in turn could make life for the memory manufacturers interesting. (In a related side note, Elpida’s bondholders have come in saying the Micron bid is way too low.) Or, Intel and AMD could jump in and force the issue, betting on adoption farther down their PC supply chains.

DDR4 and IP supporting it in ARM server space could prove to be a turning point for technology investment, an inflection point in the way things have been done and a change from the PC driving. Or, it could end up being a bridge too far, but paving the way for another specification suited for mobile devices.

What are your thoughts on the outlook for DDR4, LPDDR3, the ARM server market, and the overall dynamics of PCs, servers, tablets and phones versus memory technology?

Author : Don Dingee
Source : http://www.semiwiki.com