Category: BLOG

  • The In-House Custom Semiconductor Chip Development

    The In-House Custom Semiconductor Chip Development

    Photo by Jason Leung on Unsplash


    THE REASONS TO DEVELOP CUSTOM SEMICONDUCTOR CHIP IN-HOUSE

    As technology progresses and touches every aspect of day-to-day life, the dependence on semiconductor solutions is also growing. These solutions are made by semiconductor companies and power everything from sensors to smartphones to cars to satellites, to name a few.

    One of the most critical infrastructures that the semiconductor industry powers is the combination of data centers and portable computing systems. These two systems are interconnected, as one cannot do without the other. Today, the majority of the requests a smartphone user sends end up in one of the numerous data centers around the world. The data centers quickly crunch the request and send the result back to the requesting user. As the customer base and the number of internet users grow, there is a surge in demand for power-efficient computing systems (both data centers and portable computing systems) from software and data-driven companies.

    Data-Driven Industry Is Getting Into Custom Semiconductor Chip Development

    The big software and data-crunching companies have long depended on specific semiconductor solution providers, who have been powering their data centers and portable computing systems for decades. The silicon chips these semiconductor companies design often fall into the category of general-purpose chips, so the same chip is used by different customers even though their requirements might differ. So far, the general-purpose strategy has worked wonders. However, as the software industry explodes (due to data), the big giants are realizing the importance of powering their data centers (and in some cases portable computing systems too) with custom chips developed in-house.

    This change in landscape is mainly because the data-crunching companies understand the needs, purposes, and features required to drive bottleneck-free solutions for their customers. This is only possible by starting chip development in-house, so that software companies can deploy custom chip solutions across their data centers to drive services more efficiently. It is evident from the fact that YouTube has deployed its own chip for video transcoding to process videos faster, and Microsoft has shipped its Pluton security chip for the Windows platform.

    While providing better solutions is certainly the main goal of developing a custom chip, there are several other reasons too. Together, these reasons determine whether in-house chip development by non-semiconductor companies is a win-win idea or not.

    Cost: One of the major driving factors for developing chips in-house (at least the design part) is cost. Having control over what chip gets designed and how it is deployed (as per the features) can potentially enrich the user experience while bringing in savings. Savings are captured mainly through usage, as different computing systems within the company start utilizing the custom solutions. In many cases, the benefits can also be gauged by how much power saving is achieved (in data centers) compared to the traditional outsourced general-purpose solution.

    Time-To-Market: Another benefit of designing custom semiconductor chips applies to companies whose end product is a smart solution. This can range from kitchen appliances to televisions to desktops and many more. The ability to design and create chips in-house allows greater control over launching products and takes away the uncertainty that general-purpose solutions introduce. This is especially true for companies that heavily rely on x86 architecture solutions to drive their future data centers.

    Flexibility: Software changes very quickly and can demand new features from the silicon chip. Without in-house development, all these requests eventually have to go outside the company in the form of outsourcing. With a dedicated in-house silicon development team, the software team can collaborate with the internal team (safeguarding IPs) to build better hardware-software systems that power emerging solutions.

    Features: If a company sells laptops but relies on an outside vendor for chip development, that dependency makes it vulnerable. Incorporating chip development in-house provides a way to balance chip requirements and drive better systems. It can also push outside vendors to bring in new features, and in the long term, that competition helps the industry at large.

    Applications: Developing semiconductor chips in-house can also open avenues to expand the application area. This is especially true for smart device providers, who often have to build systems based on what is available in the market. In-house chip development, if planned well, can allow companies to expand their portfolios by driving new end products for their customers.

    Dependency: Companies that run data centers are heavily dependent on different vendors for the silicon chips that power their systems. Many of these solutions are not specifically designed to cater to each customer's requirements. This makes the data center companies heavily reliant on external factors to drive in-house innovation, which today certainly requires custom chips.

    All of the above reasons are driving several big software companies to pursue in-house semiconductor chip development plans.

    It is also true that not all companies have the need or the focus to create such custom solutions. But in the long run, as dependency on silicon chips grows, the risk of not developing semiconductor chips in-house might be far greater than the cost of planning for it.


    Picture By Chetan Arvind Patil


    THE REQUIREMENTS TO DEVELOP CUSTOM SEMICONDUCTOR CHIP IN-HOUSE

    Developing semiconductor solutions is not an easy task. Even for the big software giants, it has taken years of planning and execution to reach a stage where they can deploy custom, in-house developed silicon solutions across data centers and portable computing systems. This is why it is important to understand the different requirements that are the driving factors in ensuring the in-house semiconductor chip is impactful and profitable at the same time.

    In-house silicon chip development requirements take time to execute and often demand tons of resources, apart from the time it takes to perfect a semiconductor chip solution.

    Team: The most important criterion for developing a successful in-house chip is to ensure that there is a team with an excellent set of skills to execute custom chip development flawlessly. The team often has to combine excellent design and manufacturing skills. This means hiring individuals who have been in the semiconductor industry for a long time and are capable of developing semiconductor solutions through long-term research and development. A dedicated manufacturing team is also critical to bring ideas to life.

    Acquisition: The team is one part of developing in-house silicon chips. Another part is the ability to acquire outside assets (IPs and patents) as and when required. This greatly pushes the in-house development activity in a positive direction and in many cases reduces the effort required to bring in-house silicon chip development to reality.

    Investment: Managing teams, labs, and other resources often requires a massive amount of money. If a company without a semiconductor background is entering in-house chip development, then it should ensure a large amount of investment is available for a very long time. It is equally important to ensure that, over the long chip development and research process, the investment will pay off in the long run.

    Roadmap: In-house chip development also means having a clear strategy for why the company should do it. Having teams and resources to tackle one specific feature without a plan is not a good way to invest time and money in in-house chip development. Major emphasis should be on the long-term plan and how it will benefit the company. This is why a clear roadmap is a must-have requirement.

    Balance: Not all semiconductor solutions require in-house development, and that is why it is very important to balance the focus in terms of which part of the silicon requirement should be outsourced and which is worth developing in-house. It is not possible for software or data-driven companies to become full-fledged semiconductor solution providers overnight, and no single company (even a core semiconductor one) develops everything in-house. This is why a filtering mechanism that balances in-house work against outsourcing is important.

    Bottlenecks: A major goal of in-house silicon chip development is also to remove any barriers to developing new products. The roadmap should allow bottleneck-free development of in-house semiconductor products as long as they meet the company's requirements.

    The reasons and requirements showcase how and when non-semiconductor companies should get into the semiconductor design segment. In-house semiconductor development started long ago, and many companies (Google, Microsoft, and Amazon, to name a few) have already enjoyed success with it. The major reason for doing so has been greater control over designing features, which in reality removes the issues these companies were facing.

    This trend of taking matters into their own hands and designing solutions in-house is certainly going to continue, more so due to the semiconductor shortage and the impact it has had on several industries.


  • The More-Than-Moore Semiconductor Roadmap

    The More-Than-Moore Semiconductor Roadmap

    Photo by Jeremy Zero on Unsplash


    THE BUILDING BLOCKS OF SEMICONDUCTOR ROADMAP

    The semiconductor industry has enjoyed the success of doubling (every two years) the number of transistors in a silicon chip, which has allowed semiconductor companies worldwide to offer novel semiconductor products and solutions. This is exactly what Moore's law predicted when it was proposed more than five decades ago.

    Increasing transistor density per given area allows computer systems to cater to multiple (and numerous) requests at the same time. This is why, in 2021, a smartphone is capable of crunching data that in the 1980s would have required a giant server.
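    The doubling cadence described above can be put into a quick back-of-the-envelope calculation. The sketch below is a rough illustration only: it uses the roughly 2,300-transistor Intel 4004 of 1971 as a starting point, and the function name and numbers are illustrative, not vendor data.

```python
def transistors_after(years, start=2_300, doubling_period=2):
    """Project transistor count assuming a doubling every two years,
    starting from an Intel 4004-era count of ~2,300 transistors."""
    return start * 2 ** (years / doubling_period)

# From 1971 to 2021 is 50 years, i.e. ~25 doublings:
projected = transistors_after(50)
print(f"{projected:,.0f}")  # ~77 billion
```

    Fifty years at that cadence lands in the tens of billions of transistors, which is the same ballpark as the largest chips shipping today, showing how well the two-year doubling has held up.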

    However, as the semiconductor world marches towards 3nm mass production (with 2nm already showcased by IBM), there is a growing concern about whether Moore's law will keep pace with the advancement in the technology-node (mainly shrinking transistor size), and what the alternative solutions are.

    More-Than-Moore Solutions Have Been In The Works For The Last Two Decades.

    The answer to this problem lies in the different unique solutions that the semiconductor industry has been working on over the last couple of decades. The industry knew there would come a time when Moore's law would no longer apply as it does today and that a course correction would be needed.

    This course correction has led to numerous design-to-manufacturing changes that have enabled silicon chips to provide more performance and better power consumption without compromising on area. These solutions are built on top of different semiconductor product development processes, which have come together to drive next-gen workloads without worrying about the future implications of Moore's law.

    Design: Driving innovative solutions that defy Moore's law by providing similar or better performance and lower power consumption often requires a novel design. These designs can be at the circuit level or the system level, and the combination of both enables richer design solutions, like AMD's chiplet-based CPU and GPU designs or Apple's M1 SoC. All these design methodologies drive the next-gen solutions needed to run future workloads optimally. Such designs often require years of research and development, which leads to patents and IP. The TSV (through-silicon via) is another design solution that has allowed novel chip designs.

    Node: When it comes to choosing a technology-node for a high-performance device like an XPU, the choice is always to go for the best out in the market. This is why companies like TSMC, Samsung, IBM, and Intel are racing to provide the most advanced solution possible. However, the base of the technology-node is the transistor, and driving a next-gen technology-node that packs more transistors than its predecessor requires alternate (and better) FET solutions. This is why new ways of scaling CMOS by leveraging new FET designs are being explored. This started with MBCFETs and will soon move towards forksheet-based FETs.

    Memory: To drive data-driven workloads efficiently, memory plays a crucial role. As designs change to accommodate More-Than-Moore solutions, the memory organization and interface also need to change. This has led companies like Samsung to come up with High Bandwidth Memory (HBM) to power next-gen AI processing solutions. Similarly, Micron has come up with alternate solutions such as HBM2E. Advancements in memory solutions are vital to ensure that any Moore-alternate chip solutions are backed up by faster data processing and data transfer.

    Package: A silicon chip is nothing but a die from a wafer that gets packaged before being mounted onto the application system. With Moore's law, the internals of the chip kept doubling (transistors mainly) to enable more performance, and this has led to alternate package technologies over the years, ranging from WLCSP to WLFO and beyond. Even the new design methodology of chiplets has led to alternate package technology from companies like TSMC, which came up with Chip-on-Wafer-on-Substrate (CoWoS), a 2.5D package technology to drive next-gen chiplet solutions. To keep up with More-Than-Moore, new package technologies will keep coming out in the market.

    Interconnect: As the number of blocks and processing units inside a given chip increases, the need to transfer data faster from one point of the chip to another also increases. This is why researchers and several companies are focusing on photonics as an alternative. Photonics can ensure that data is not only transferred without adding bottlenecks but also without loss, all while not increasing power consumption.

    Manufacturing: In the end, everything from design to interconnect boils down to whether the solution is manufacturable. New design processes and solutions often require close interaction with equipment manufacturers, FABs, and OSATs. This is why, based on years of development, the semiconductor manufacturing industry is moving towards EUV to drive next-gen manufacturing capability. This will not only enable the 3nm/2nm technology-nodes but will also drive the different package and interconnect solutions that have been proposed in the last few years.

    The different methodologies discussed above have enabled alternate solutions that complement Moore's law by adopting new design and manufacturing strategies that ensure there are no bottlenecks.

    These solutions range from having a compact chip with all the possible processing blocks to solutions where processing blocks are taken out of the chip and spread across the system. Some solutions also take a different approach of stacking the silicon in such a way that the best of 2D and 3D chip designing comes together to provide a rich user experience.

    All these solutions combined are leading the semiconductor industry towards a More-Than-Moore world.



    THE MORE-THAN-MOORE SEMICONDUCTOR ROADMAP

    The semiconductor industry has implemented several solutions that can be considered alternatives to Moore's law and that have been around for many years. These alternate solutions focus on how the design and manufacturing process should be handled to ensure there is always a way to drive more power out of a given silicon chip. All these solutions have been designed without focusing much on transistor density or technology-node.

    It would not be wrong to say that, in doing so, the semiconductor industry has created for itself a pathway into the More-Than-Moore world.

    Below are the four major milestones of the last couple of decades that have established the roadmap for the More-Than-Moore world. A few of these have been known to the semiconductor industry for a very long time, with little emphasis on whether these design and manufacturing solutions could provide a path towards More-Than-Moore or not. In reality, they indeed provide a way forward after Moore's law ends.

    System-On-A-Chip – SOC: SOCs have been around for a couple of decades. The need for multi-core systems coupled with graphics, audio, and video processing led to the SOC. The SOC allowed different sub-blocks to reside on a single die and posed a strong challenge to semiconductor designers and manufacturers: first, the complexity involved in ensuring the design works as expected, and second, the ability to produce high-yielding wafers. SOCs have seen both the best and the worst of the semiconductor product development process: some solutions reached end-of-life well before the planned date, while others lasted beyond their expected life span. In the end, the SOC provided a way to club complex and required features into the smallest area with the help of shrinking transistor sizes. However, this can only last as long as the power and thermal profile of the solution makes technical sense, and with process development (shrinking transistor size) getting harder, the SOC may not survive in the market for long. During its time, though, it allowed different features to be clubbed under the same die area.

    Multi-Chip Modules – MCM: The MCM is a step ahead of the SOC. It borrows all the ideas of the SOC but brings different types of SOCs together on a single platform. Communication between the different SOCs or ICs is then established using a high-speed interface. This has enabled several XPU-based solutions (Xeon to Ryzen) that diversify the design and manufacturing of the blocks and then leverage interface technology to ensure the data communication is as good as, or on par with, SOC solutions. Many argue that the chiplet design is one form of MCM; in the last couple of years chiplets have taken over the SOC world, and the MCM is considered to be the true step towards the More-Than-Moore world.

    System-In-A-Package – SIP: The SIP takes the best of the MCM and the SOC to come up with chip solutions that allow 3D stacking of different blocks. The interposer with TSVs (through-silicon vias) has played a pivotal role in enabling SIP. The goal of SIP is to take the 2D area and convert it into 3D by stacking the different blocks of an SOC/MCM on top of each other. This way, area consumption decreases, which 2D solutions like the SOC and MCM cannot achieve without using an advanced technology-node. SIP does have drawbacks, as it suffers from thermal and packaging challenges. With advanced technology-nodes nearing 1nm, SIP might be the best More-Than-Moore alternative for chip designing compared to the MCM and SOC.

    System-On-Package – SOP: All three of the above More-Than-Moore solutions are designed with the assumption that the end system is going to be a printed circuit board on top of which the SOC/MCM/SIP will reside. However, this does not help smaller devices like smartphones, where the goal is to make more room for the battery by shrinking the board area. To shrink the board footprint, the SOP is the best way to design a computing system. The SOP takes different chips (whether SOC, MCM, or SIP) and brings all of them inside a single package. The complexity of achieving an elegant SOP system is very high: it requires not only synchronization of the different types of systems/devices (SOC/MCM/SIP) but also a standard interface that allows packaging of all the devices while ensuring there is no bottleneck or leakage. SOP, if done correctly, might very well end the need for a board and allow a more compact silicon solution while defying Moore's law.

    The above four semiconductor design and manufacturing alternatives certainly provide a way to design chips (mainly XPUs) such that there is no need to worry about packing more transistors into the smallest area.

    From SOC to SOP, these solutions simply take the silicon area out of the equation by bringing different sub-systems together in a unique way, and they push the FABs and OSATs to come up with manufacturing technologies (which many FABs and OSATs already have) that ensure the sub-systems work seamlessly even though they are disaggregated.

    As the semiconductor industry inches towards 1nm technology-node, SOC/MCM/SIP/SOP based chip solutions are certainly going to provide a roadmap for More-Than-Moore solutions.


  • The Reasons And Mitigation Plan For Semiconductor Shortage

    The Reasons And Mitigation Plan For Semiconductor Shortage

    Photo by Marc PEZIN on Unsplash


    THE REASONS FOR SEMICONDUCTOR SHORTAGE

    In today's market, the majority of consumer and enterprise products are heavily equipped with semiconductor products (silicon chips). Over the last few years, the share of semiconductors in modern products has increased steadily. From automotive to smartphones to smart devices to aerospace, semiconductors are present everywhere. This has made them the building blocks of modern infrastructure.

    These semiconductor products (silicon chips) require a lot of precision and time to manufacture. Any gap in the manufacturing flow can eventually have negative consequences, not only for the semiconductor manufacturers but also for the end products that use these tiny silicon chips, and this is exactly what has been happening since 2020.

    Semiconductor Shortage Is A Combination Of Both The Design And The Manufacturing.

    A shortage in the semiconductor industry not only impacts the semiconductor industry itself, it also ends up costing a lot for all the companies that are heavily reliant on its products. This is why automotive production has been halted, consumer electronics are not easily available in the market, and several other consequences have followed.

    So, what is the reason for the semiconductor shortage?

    The shortage of semiconductor products is not due to one specific reason. To stop an industry like semiconductors from manufacturing, several negative factors have to come together. Unfortunately, this is what has led to the shortage of semiconductors, as the factors involved have introduced gaps in the manufacturing flow.

    Below are the major contributing factors for the semiconductor shortage:

    Forecast: Forecasting is an important part of ensuring that there is no wastage and that all customer demands are met in time, which leads to efficient supply chain management. However, forecasts are not always accurate. They rely on many factors, ranging from market demand to a customer moving to a new solution to a better-cost alternative, and many more. For the semiconductor shortage that started in 2020, the reason is mainly market demand. Due to COVID-19, several facilities had to be closed down, which forced consumers and businesses to work remotely. This led to a sudden surge in demand for smart solutions (one of several such surges) and eventually increased semiconductor demand, breaking the forecast. It prompted companies to play safe by stocking (manufacturing) more devices than planned, which put pressure on semiconductor manufacturing capacity, slowed down silicon development, and eventually put manufacturers through a never-seen-before capacity management challenge.

    Shutdown: Semiconductor FABs are designed to run 24×7. The facilities are so complicated that any kind of shutdown can take weeks to recover from, and will eventually lead to a shortage in silicon chip delivery, which in turn halts the production of several other dependent industries (automotive, for example). This is exactly what has happened in the last few months. Some FABs had to be shut down due to COVID-19, some due to extreme weather, and a few due to fire hazards. All the impacted FABs were large facilities catering to core products/solutions. Once a shutdown happens, it becomes difficult to restart the FAB quickly without proper checks to ensure there are no blocking points in the manufacturing flow.

    Advanced Node: A smart product is made of different electronic chips, each using a different technology-node. However, the smartest and most critical pieces in these devices use the most sophisticated technology-node on the market. Unfortunately, not many semiconductor FABs manufacture advanced nodes. This creates a heavy dependency on these FABs, and any shortfall in production eventually impacts the end product. A surge in demand in one segment (relying on the advanced node) has led to a shortage of silicon products (using the advanced node) in other market segments. There is no time to expand the facilities, and this has eventually led to the shortage of advanced-node silicon.

    Human Resource: A semiconductor FAB is highly automated but still requires human intervention. Several tasks have to be carried out manually, and all these tasks are part of building the production wafers. COVID-19 led to curtailing the number of people inside the FAB (for their own safety), and this slowed down production. Slower FABs are not good for the industries relying on semiconductor products, and this slowdown contributed to the shortage.

    Supply: Supply is not only about shipping the semiconductor product out of the FAB. It is also about ensuring the wafers and assembled parts keep moving until the end product has been assembled. Unless and until all the silicon chips are available, the end product (a television, for example) cannot be assembled. The semiconductor shortage is not just about the silicon products that go inside a device (for example, a smart camera); it is also about the several other components that come from different FABs and facilities. A supply constraint at any one of the supply points can introduce a shortage.

    The above points are only a handful of the reasons; in reality, there can be more valid reasons for the shortage. In the long run, the semiconductor industry will overcome the shortages and will also learn from them.


    Picture By Chetan Arvind Patil


    THE MITIGATION PLAN FOR SEMICONDUCTOR SHORTAGE

    A shortage of any product (from groceries to cars to semiconductors) eventually ends. It does take time, and it also leaves behind learnings that should be leveraged to overcome any such scenario in the future.

    When it comes to a high-tech industry like semiconductors, there is no single answer to avoiding a shortage. The shortage in the first place was caused by many factors. Based on the market situation, below are a few points that can help mitigate a shortage in the future:

    Older Node: Moving all the critical semiconductor solutions to the advanced node without building capacity is not the way forward. The FAB-LESS/IDM semiconductor design houses have to go back to the drawing board and understand how to diversify technology-node usage based on the available capacity. Of course, the technology combination should still meet the specification, but the end goal should also consider what the market capacity realistically is, and whether there would be any capacity constraint if the different shortage factors came together again in the near future.

    Internal Capacity: IDMs already have internal capacity that they can leverage to capture a sudden increase in semiconductor demand. However, there needs to be a thorough review of what type of capacity (solutions) is in-house and how to balance it against the external capacity. This allows any external capacity shortage to be absorbed internally and thus helps in mitigating any pitfall.

    Backup: Semiconductor products eventually get tied to a specific manufacturing flow that includes FABs and OSATs. It takes years to move these products to newer facilities. This is why semiconductor companies should start qualifying their products for multiple facilities to ensure any gaps/shortage at one location is fulfilled by the backup option.

    External Capacity: Pure-play foundries are very crucial to the semiconductor industry. They play an important part in ensuring that products meet end customers' demand. However, over the last couple of decades, there has been a growing reliance on external capacity. There is nothing wrong with that, as not all semiconductor companies can put so much money into a semiconductor FAB. Still, a problem arises when there is a constraint on external capacity, since external capacity can be pre-booked or pre-occupied by any entity in the world. This puts pressure on the semiconductor design houses that rely on external capacity but do not have enough of it pre-booked. It has prompted an important discussion about building more external capacity that caters not only to today's demand but to the demand of the next few decades.

    Modularity: Both the design houses and the manufacturing facilities will have to quickly adapt to a modular approach. This can mean using any technology-node possible and also using any semiconductor manufacturing facility that is available. It will be a daunting task but should be doable.

    Semiconductor design and manufacturing both play a crucial role in product development. A shortage of semiconductor products is not only about manufacturing but also about the design constraints that hinder flexibility in the manufacturing flow.

    This is why the semiconductor shortage of 2020/2021 should be seen not only from the manufacturing aspect but also from the design point of view.


  • The Costly Semiconductor Data

    Photo by Jorge Salvador on Unsplash


    THE COSTLY LIFE CYCLE OF SEMICONDUCTOR DATA

    The importance of data in different industries has only grown over the last few decades. This has in turn given rise to different new techniques and tools that enable enterprises to capture and analyze data on the go. It will not be wrong to say that today, the most prized commodity in the market is data.

    The same story applies in the semiconductor industry. Shrinking transistor sizes and the need to build efficient devices have increased the importance of capturing data from all the possible semiconductor product development processes. However, the major hurdle in doing so is the cost associated with the process.

    Semiconductor Data Is The Next-Gen Oil.

    Shrinking transistors enable better devices and empower the long-term product development roadmap. However, when it comes to the cost of doing so, things start to get complicated. Cost is also the major reason why there are only three players (Intel, Samsung, and TSMC) battling in the 5/3 nm race. The cost required to set up the FAB and the support system (equipment, facilities, tools, resources, and many other things) is very high and requires long-term investment planning. Even building a MINI-FAB today requires billions of dollars to set up, and it thereafter takes years to break even.

    Setting up a smaller research and development facility is an efficient way to capture semiconductor data, but it is not feasible to rely on smaller labs/setups for too long. In order to meet worldwide demand, the facilities eventually have to expand.

    – MINI-FAB >$1 Billion

    – MEGA-FAB > $4 Billion

    – GIGA-FAB > $12 Billion

    – MINI/MEGA/GIGA = Defined Based On Wafer Capacity.

    This makes the process of capturing and handling semiconductor data crucial. Any data point that comes out of the pre-silicon or post-silicon stage has to go through a specific life cycle before being stored for long-term usage. This life cycle handling process itself adds cost on top of the FAB investment. In the long run, semiconductor companies understand the importance of setting up the data life cycle flow and have invested relentlessly both in the process that leads to silicon and in the process required to generate data out of different silicon products.

    Below is an overview of how semiconductor data is handled and why each of these processes is vital. In a nutshell, these steps are no different from how any big data is handled. When it comes to the semiconductor industry, the major difference is the effort (cost and resources) it takes to generate data from different types of semiconductor products, which often requires large setups.

    Generation: Generating semiconductor data requires a silicon wafer (with testable dies) and a test program that can run on the full wafer. Both of these demand different sets of tools and resources. A dedicated FAB is tasked with creating a silicon wafer that has the product printed (repeatedly) across its full area, which in itself is a costly process. On the other hand, a dedicated tester environment (OSAT) with different hardware and equipment is required to drive the test program. Such a long and delicate process requires not just the product but also manufacturing, logistics, handling, and equipment resources. The sum of all these investments eventually allows semiconductor data to be generated, and even without going into details, it is easy to see how costly and time-demanding this process is.

    Cleaning: Generating data out of the silicon is the first step. As explained above, it requires different sets of hardware equipment to drive semiconductor data generation. The data in the majority of cases is generated in a standard format, but it still requires a lot of post-processing and elimination techniques to make sense of it. This cleaning process is more on the software side and demands data processing tools that help engineers understand different features of the silicon data. The cost comes from setting up the flow that allows semiconductor companies to capture the data at the source, which can then be sent to servers for engineers to retrieve. From there, the cleaning steps start.
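    The cleaning step can be sketched in a few lines of code. This is a minimal, hypothetical example (the record layout, field names, and limits are assumptions, not a real tester format) of filtering placeholder and out-of-range readings before analysis:

    ```python
    from statistics import mean

    # Hypothetical per-die test records pulled from a tester log.
    raw_records = [
        {"die_id": "W01_D001", "leakage_ua": 1.2},
        {"die_id": "W01_D002", "leakage_ua": -999.0},  # tester placeholder for "no read"
        {"die_id": "W01_D003", "leakage_ua": 1.4},
        {"die_id": "W01_D004", "leakage_ua": 87.5},    # out-of-range outlier
    ]

    def clean(records, lo=0.0, hi=10.0):
        """Drop placeholder and out-of-range readings before analysis."""
        return [r for r in records if lo <= r["leakage_ua"] <= hi]

    cleaned = clean(raw_records)
    print(len(cleaned), round(mean(r["leakage_ua"] for r in cleaned), 2))
    # 2 dies survive cleaning; mean leakage 1.3 uA
    ```

    Real flows are far richer (standard file formats, per-test limits, statistical outlier screens), but the shape of the task is the same: capture at the source, then filter before anyone draws conclusions.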

    Profiling: Simply collecting random silicon data is not useful. The semiconductor product itself is going to be used in different systems and environments, which will push the product through different operating conditions. To ensure the product works under these conditions (temperature, process variation, current, and voltage settings), the development phase pushes the product/silicon through several testing criteria, often based on industry-accepted standards (AEC, JEDEC, IPC, etc.). Carrying out all these tests to gather the semiconductor data that will qualify the product for the market is challenging, and its rising cost adds another costly layer to capturing semiconductor data.

    Mapping: Semiconductor data is often captured in big chunks, at the wafer level or the lot level. In both cases, it becomes really important to ensure that the data can be traced back to the die/part it originated from. The process to do so starts much before the test data is available, through techniques ranging from die marking to memory-based traceability. This again points to the fact that data mapping also requires base resources to achieve the target of not only ensuring the semiconductor data is available but also making it easy to map back to its source.
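    One simple way to picture the traceability idea is an identifier scheme that encodes lot, wafer, and die coordinates, so any test record can be mapped back to its physical source. The format below is purely illustrative, not an industry standard:

    ```python
    # Sketch of die-level traceability: a hypothetical die ID that encodes
    # lot, wafer, and (x, y) position on the wafer.
    def make_die_id(lot, wafer, x, y):
        return f"{lot}-W{wafer:02d}-X{x:03d}Y{y:03d}"

    def parse_die_id(die_id):
        """Recover the physical source of a test record from its die ID."""
        lot, wafer, xy = die_id.split("-")
        return {"lot": lot, "wafer": int(wafer[1:]),
                "x": int(xy[1:4]), "y": int(xy[5:8])}

    die = make_die_id("LOT7A", 3, 12, 45)
    print(die)                      # LOT7A-W03-X012Y045
    print(parse_die_id(die)["x"])   # 12
    ```

    In practice the ID may live in laser markings or on-die fuses/memory, but the principle is the same: bake the mapping in before test data ever exists.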

    Analysis: Once all the available data is captured and accessible to engineers, the main task starts. Clean data (no skews or deviations) is the dream of every engineer, yet even with the cleanest data it is crucial to question different aspects of it. And if there is a failing part, then finding the root cause is a must. This process requires sophisticated data exploration tools that bring efficiency. These tools should also be able to connect back to any historical/relevant data that can explain a deviation or misalignment in new data. If the data cannot answer all the questions, then comes the interesting part of putting together a root cause analysis plan. All this is not only time-consuming but also demands costly resources.

    Visualization: Analysis and visualization go hand in hand. However, not all tools are great at both. This pushes semiconductor data engineers towards exploring the data using different tools. In the majority of cases, these tools are procured from the software data industry. But companies are also willing to invest internally to come up with easy visualization techniques that can provide information as soon as the data is ready. This requires a dedicated team and tools, which demand capital.

    Monitoring: Monitoring is another aspect of semiconductor data. It can be about the process steps involved during semiconductor fabrication or about all the equipment used for semiconductor product development. Each of these data points has to be monitored in real time to ensure that there are no missteps during the fabrication or testing part of product development. The environment required to set up monitoring and capture the monitored data again demands investment.

    Storage: Given how time-consuming and costly the process of generating semiconductor data is, it is vital to ensure that every data point is stored for long-term usage. The next product development might, out of nowhere, require data from different products to establish a scale or reference. Doing so is only possible if the data is stored through a standard process and is easily retrievable. This is also the major reason to invest in long-term semiconductor data storage servers, which in turn requires long-term investment.

    In today’s data-driven world, semiconductor companies have to invest in the resources required to drive each of the steps in the cycle. Capturing, analyzing, and storing data in the world of semiconductors is more vital given how difficult and time-sensitive the product development process is.

    Without thoroughly analyzing the real silicon data, it is impossible to conclude whether the product is going to be a success or not. This is why data is one of the major building blocks of the semiconductor industry.


    Picture By Chetan Arvind Patil


    THE REASONS FOR COSTLY SEMICONDUCTOR DATA

    The life cycle of semiconductor data presents a clear picture of the resources required to capture the relevant data points. As the industry moves towards smaller, more advanced technology-nodes along with innovative package technology, the associated cost will also rise.

    All these developments will have a major impact on the resources required to capture semiconductor data, as each new semiconductor technology development will demand upgraded FABs and OSATs along with all the supporting tools, equipment, and other hardware resources.

    Below are the major reasons why costly semiconductor data is here to stay and how it is impacting today's process nodes and packages in the market.

    Equipment: The different sets of tools and equipment required to drive the successful development of a semiconductor product are key to generating relevant data. Each new design methodology, technology node, and packaging process demands new sets of equipment. This adds to the cost of development and leads to FAB/OSAT upgrades or expansion. This added cost is necessary to ensure that semiconductor data can be generated and analyzed successfully, which clearly shows why the cost of semiconductor data is on the rise.

    Data Tool: Raw data is the first step towards analyzing how the product behaves on real silicon. To go a step further, investment is required to procure advanced data analytics tools. The feature-based subscription cost associated with them is also on the rise and heavily impacts the data analysis part. On top of this, every other year a new set of programming solutions pushes semiconductor data engineers towards a new way of analyzing data. This also requires investment not only in the tools but also in training and skill upgrades.

    Skills: Making the most of the data demands skills that take years to master. In today's day and age, the explosion of new technology (on the software side) is also pushing engineers to pick up new skill sets on the go. This requires companies to invest not only in core product development resources (FAB to OSAT) but also in people who can explore data with limited information and present the full picture.

    Resources: Apart from human resources, the data also demands a unique support environment, from a factory setup that enables data generation to a data warehouse that stores all the data-related information. Such resources require a dedicated, knowledgeable team and tools. All the cost associated with such processes goes into producing the relevant semiconductor data. Without these resources, it is impossible to do any of this work (not just data exploration).

    Process: Technology-nodes, packages, materials (and beyond) all go through different life cycles and processes. Each involves working in a dedicated lab with a unique set of tools. To ensure the processes are right, the tools have to be efficient, and the combination of process and tools eventually leads to trustworthy data. The journey of capturing semiconductor data is thus heavily dependent on these costly processes.

    Research: To drive next-gen FETs and XPUs, continuous research is required and it also demands data to validate the new technology/solution. This means a dedicated setup/lab with the next-gen process, equipment, and data tools. All this adds to the costly process of generating the data for research and development activities.

    The journey of semiconductor data is very interesting and high-tech. It certainly involves a lot of processes and steps that are dependent on different facilities, equipment, and human resources. As long as the goal is to come up with a new silicon solution, all these semiconductor resources will keep demanding high investment, and in the long run, it is the right thing to do.

    The growing importance of semiconductor solutions in every aspect of life is also raising the question as to whether the semiconductor data is the next-gen oil.


  • The Semiconductor Memory

    Photo by Stef Westheim on Unsplash


    THE ROLE PLAYED BY THE SEMICONDUCTOR MEMORY

    In the world of computing, memory is a vital piece of silicon that can either make or break the computer system. It is nearly impossible to perform any computing task without the help of computer memory.

    Memory is also the major reason why today we have large data storage centers and high-speed computers. There is no denying that software plays an important role in speeding up the application. However, the balance of software with the right hardware configuration is also a crucial part of making an efficient system.

    Portable computers are a perfect example of how important semiconductor memory is. A decade ago, it was unimaginable to have gigabytes of RAM and storage. Continuous innovation has brought down the memory cost significantly, providing smartphone manufacturers with computing resources that have steadily improved processing capabilities. As the world becomes more hyper-connected, the role played by semiconductor memory is only going to be more vital than ever.

    In the last few years, there has been a shake-up in the semiconductor memory business where the number of top players providing memory products is shrinking year on year. In the long run, this might have major consequences due to greater dependency on specific players. Still, from the technical front, the semiconductor memory development will keep playing a supportive role in providing processing support on the go.

    The role played by semiconductor memory has only increased over the last few years, mainly due to the proliferation of advanced data computing.

    Data: Modern computer applications are rich both in terms of user interface and user requirements. The consumer today expects a request to be handled in the shortest time possible, and without impacting battery life or consuming more power. Whether it is a data center or a portable device like a smartphone, memory plays a key role in handling data requests efficiently. While much of the work is done on the operating system and application side, the memory itself also leverages device-level techniques to minimize the footprint.

    Parallelism: Another important role played by semiconductor memory is enabling parallelism. Hyper-threading has been around for decades. The decreasing cost and improving cache/storage have enabled better parallelism. In the long run, the total energy saved by minimizing the number of operations has only helped (due to larger memory) the overall performance and also the user experience.

    Latency: In any given computer system, data travels through different internal blocks before a request can be processed correctly. This often leads to latency and impacts user experience. To ensure there is no delay in processing data requests, computer architects have been using memory as a way to optimize data processing. This has led to different innovative XPUs that use memory in a manner that significantly reduces latency.

    Bottleneck: Avoiding bottlenecks is another important reason why memory is one of the biggest pillars of an efficient XPU. A bottleneck can occur when multiple applications are racing to get their data processed. In such scenarios, it becomes important to cache information closer to the processing units, which often requires either a large second- or third-level cache or an efficient algorithm that can handle both tasks without adding a bottleneck. It often happens that the computer runs out of memory and has to sacrifice one of the two requests. This is why it has become important to leverage semiconductor techniques to hold more data closer to the core processing unit, and this role is played by semiconductor memory.
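    The idea of keeping recently used data closest to the consumer, and evicting the least recently used entry when space runs out, is the same one behind a software LRU cache. A minimal sketch, in software rather than silicon:

    ```python
    from collections import OrderedDict

    # A simplified LRU cache: keep recently used data "close" and evict
    # the least recently used entry when the cache is full.
    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def get(self, key):
            if key not in self.store:
                return None              # cache miss: fetch from slower memory
            self.store.move_to_end(key)  # mark as most recently used
            return self.store[key]

        def put(self, key, value):
            if key in self.store:
                self.store.move_to_end(key)
            self.store[key] = value
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

    cache = LRUCache(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")         # "a" becomes most recently used
    cache.put("c", 3)      # evicts "b", the least recently used entry
    print(cache.get("b"))  # None
    print(cache.get("a"))  # 1
    ```

    Hardware caches use different replacement policies and are implemented in SRAM rather than dictionaries, but the trade-off the paragraph describes, capacity versus proximity to the processing unit, is the same.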

    Irrespective of the purpose for which the computer system is being used, in the long run, semiconductor memory plays a key role in ensuring the user experience is never compromised. This is also the major reason why the world will keep innovating futuristic semiconductor-driven memory solutions.


    Picture By Chetan Arvind Patil


    THE FUTURE OF THE SEMICONDUCTOR MEMORY

    The world is racing towards digitization, and the foundation of this race was laid long ago with the invention of modern hardware and software systems. This journey towards an automated and digitally compliant world will not be possible without the memory units that are designed and manufactured by the semiconductor industry.

    This is why semiconductor memory solutions will keep playing a vital role in tomorrow's applications and solutions across different industries.

    5G: The next-gen wireless communication technology, 5G, is a more data-driven solution than its predecessors. It will lead to the deployment of different types of data-hungry applications. To cater to all such requests, nearby memory stations will be required that can fulfill the request with the help of caching or other secure communication solutions. 5G will also push data storage activity, which will eventually require more efficient and high-speed memory solutions.

    IoT: The Internet-of-Things has been around for almost a decade. Even a laptop connected to wireless internet can be called an IoT device. However, 5G, driven by Android and low-cost consumer electronic products, will speed up the use cases of smart and tiny devices. This will drive the need to optimize memory performance, which will also minimize energy consumption.

    Factories: From automotive to aerospace, factories are becoming more autonomous and smarter. This is not new and has been the case in major parts of the world for a few years. The smarter factory concept is going to demand more robust and secure memory that is hack-proof. This is another area where semiconductor memory solutions will play a critical role.

    Smart Devices: Smart devices are not just about smart cameras or drones. Any given device that is capable of delivering smarter solutions while not consuming more power than its predecessor has the right to be called a smart device. While applications and software also play a key role, the memory requirement for such devices is also going to have a major impact on making future smart devices even more efficient.

    Mobile: Mobile is not just about smartphones; it is about any device that enables mobility. It can be a cell phone, laptop, car, or even a drone. All these devices eventually require a high-speed and highly reliable memory system that can work without any issues for years. This is another reason why semiconductor memory is important in the long run.

    Data Center: Catering to data requests remotely also requires data centers that can hold large amounts of data. This task is impossible without meeting the storage requirement, which often means investing in racks that hold large amounts of semiconductor memory.

    Space Exploration: Launching satellites and sending rovers to different planets also requires different semiconductor solutions. One of the key pieces is memory due to lag in real-time communication. This requires the remote satellite/rover to store data locally till the data delivery confirmation from the space station is received. Such critical missions demand the most reliable memory solution possible and this is going to keep pushing the semiconductor industry towards innovative memory requirements.

    Autonomous: The autonomous world is exploding. Whether it is traffic management, inventory optimization, or maintaining a large warehouse, these places are getting heavily automated. This requires algorithms to run on machines at the edge, which demands a good combination of processing capabilities and memory management. This is another area where semiconductor memory will play a supportive role by allowing over-the-air updates and software optimization.

    There is not a single area where semiconductor memory is not playing a key role. Wherever there is a smart system running software, there is also memory acting as a catalyst.

    With the invention of new capacitor technology and the continuous development of new memory nodes, the semiconductor industry will see transformative memory products and solutions for the next few decades.


  • The PPA Management In Semiconductor Product Development

    The PPA Management In Semiconductor Product Development

    Photo by Christian Wiediger on Unsplash


    THE IMPORTANCE OF PPA IN SEMICONDUCTOR

    Semiconductor products are designed and manufactured for different conditions with varying requirements. These conditions and requirements are often a combination of several technical criteria.

    One such important criterion is Power, Performance, and Area (PPA).

    In the end, the goal of developing semiconductor products is to provide as much functionality as possible. This requires a perfect combination of PPA: low power consumption with high performance in the smallest area possible.

    Shrinking transistor sizes have ensured that die/chip area is not a technical concern when designing a semiconductor chip. However, small die/chip areas pose other technical challenges, mainly balancing power consumption without affecting performance.

    The three-way balancing act of power, performance, and area (PPA) becomes more challenging when semiconductor products are used for applications that demand a smaller die/chip area while also expecting higher performance. With decreasing die/chip area and increasing performance, managing total power consumption (static and dynamic) also becomes an uphill task. This leaves designers with limited knobs to play with, which is why considering PPA is important when developing semiconductor products on advanced technology-nodes.
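    The power side of this balancing act is often summarized by the classic first-order dynamic power relation, P_dyn ≈ α·C·V²·f (switching activity times capacitance times voltage squared times frequency). The sketch below uses purely illustrative values, not figures from any real product, to show why supply voltage is the strongest knob designers have:

    ```python
    # First-order dynamic power estimate: P_dyn ≈ alpha * C * V^2 * f.
    # All values below are illustrative, not real silicon numbers.
    def dynamic_power(alpha, c_farads, v_volts, f_hz):
        return alpha * c_farads * v_volts**2 * f_hz

    base = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.0, f_hz=2e9)
    scaled = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.8, f_hz=2e9)
    print(f"{base:.2f} W -> {scaled:.3f} W")
    # Because power scales with V^2, a 20% voltage drop cuts dynamic power ~36%.
    ```

    Static (leakage) power follows different, technology-dependent relations, which is part of why the total power budget gets harder to manage on advanced nodes.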

    There are four major factors that PPA can have an impact on:

    Efficiency: Not a single semiconductor product is designed and fabricated to perform tasks inefficiently. The only goal of a smart semiconductor chip is to provide maximum efficiency. While 100% efficiency is not possible, the goal of PPA is to ensure there is minimal negative impact on the battery (given that the majority of electronic systems run on a portable battery), and this is achievable only when the budget (during the design phase) takes into consideration what the performance and power scheme will be for a given die/chip area. From there, building the full chip design becomes a well-laid-out task.

    Latency: The larger the die/chip area, the slower the data traffic. This is especially valid for XPUs, where N cores work in synchronization to achieve the single task of crunching data in the fastest possible time. If the area is large and the layout is not optimized, then the latency introduced will be higher. On the other side, a large area (or even a smaller one in some cases) also has a far greater impact on total power consumption, while performance is mostly on the positive side. This is another reason why balancing PPA becomes a critical task in semiconductor product development.

    Thermal: The smaller the die/chip area, the less room there is to transfer heat out of the system. This also leads to more static power consumption on top of the dynamic. On the other hand, a smaller area also requires advanced technology nodes, which eventually mean higher junction and package temperatures (apart from skin temperature). This demands smart dynamic thermal management techniques, which are only possible if PPA is managed efficiently.

    Cost: In the end, the goal of any product (not just a semiconductor product) is to optimize the cost of development. PPA plays a crucial role in cost too. Increasing die area means less room for dies on the wafer, which means more wafers to produce the same volume, and this eventually leads to higher development cost. This is another reason why PPA is an important factor when it comes to increasing product margin.
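    The area-to-cost link can be made concrete with a common first-order dies-per-wafer estimate (ignoring scribe lines, edge exclusion, and yield; the numbers are illustrative):

    ```python
    import math

    # First-order dies-per-wafer estimate:
    #   dies ≈ pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
    # where d is the wafer diameter and A is the die area. The second term
    # roughly accounts for partial dies lost at the wafer edge.
    def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        d, a = wafer_diameter_mm, die_area_mm2
        return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

    small = dies_per_wafer(300, 50)   # 50 mm^2 die on a 300 mm wafer
    large = dies_per_wafer(300, 100)  # doubling the die area
    print(small, large)  # the larger die yields far fewer dies per wafer
    ```

    Doubling die area more than halves the number of sellable dies per wafer once edge losses are counted, which is exactly why the area knob of PPA feeds directly into product margin.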

    At the end of the day, the ultimate goal of the semiconductor product is to provide solutions that not only fit the market but are also the best version in the given category.

    This is where optimizing PPA is vital, as it ensures the different functionality of the given die/chip is geared towards a product that outperforms any other competitor in the market.


    Picture By Chetan Arvind Patil


    THE PPA BOTTLENECK IN SEMICONDUCTOR

    Designers worldwide are always working towards the goal of achieving the required specification. This allows them to ensure that the semiconductor product is meeting all the criteria for the system it will eventually become part of. However, there are always design constraints, and PPA is one such vital constraint.

    In reality, it is difficult to create a perfect balance of all three components of PPA. One or another parameter will always outweigh the others. This is especially valid for critical semiconductor components like XPUs, which often demand less die/chip area with high performance.

    Still, there are PPA driven bottlenecks that may hinder the success of the product:

    Technology-Node: Balancing PPA requires choosing a chip development technology that not only covers the product's technical requirements but is also not costly to manufacture. Post design, the technology-node is going to stay with the product till the end of the product's life. This is why PPA can often drive technology choices that are not always the most advanced. This may or may not have a major impact on the product's success; however, PPA certainly adds a constraint on the choice of technology-node.

    Intellectual Property: Semiconductor design is getting increasingly driven by IP. This can be bad news for next-gen chip design, as every new IP block might already have its own PPA budget/scheme. This hinders the ability to play with the chip's overall budget/scheme, which is why IP can sometimes introduce PPA bottlenecks in the chip design process.

    Memory: Memory is one of the most critical blocks in any given modern chip, more so when the chip is designed for workload-intensive tasks. An unpredictable number of reads/writes can throw off the PPA budget for any given product. In such scenarios, it becomes difficult to count on the PPA budget scheme, and it often requires millions of simulations to validate it. This creates bottlenecks in the design schedule and adds pressure to validate all the possible read/write scenarios.

    Interconnect: If the area component of PPA has a large say in the overall budget, then it can often lead to interconnected block systems that introduce a lot of data traffic. This can have a heavy impact on chip performance and is often true for XPU-based semiconductor chips. This is another possible way in which PPA introduces bottlenecks into the system.

    As the semiconductor industry moves towards more advanced FETs, the importance of PPA will grow too. It can either lead to PPA schemes that allow chips to outperform their predecessors, or it can have a negative impact (due to unbalanced PPA schemes). This is also one of the major reasons why new FETs and silicon chips primarily focus on PPA to showcase the positive features of their new solutions.

    In the long run, as newer FETs and technology-nodes get developed, both the semiconductor design and the manufacturing process will keep dealing with the act of balancing the PPA.


  • The Semiconductor OSFAB

    The Semiconductor OSFAB

    Photo by Patrik Kernstock on Unsplash


    THE ROLE OF OSFAB

    In the majority of industries, outsourcing enables a way to operate efficiently. The efficiency is achieved both from a technical and a business point of view.

    In the software industry, outsourcing is primarily focused on providing the right tools and services required to drive internal day-to-day operations efficiently. This allows the customer (companies) to instead focus on their core business. The same outsourcing strategies apply in the hardware industry, in some cases more than they do in the software industry.

    The core business of the semiconductor chip design companies is to come up with designs that allow them to create products for their niche market. In many cases, semiconductor companies often have to compete with others to win the business. To drive winning strategies, no matter what, the semiconductor companies have to focus on the manufacturing process. Without manufacturing and delivering samples on time, there is no way to win the market. This is why companies without in-house semiconductor fabrication facilities (FAB-LESS) have to heavily rely on OSFAB.

    Outsourced Semiconductor Fabrication (OSFAB) is not new to the semiconductor industry. Companies like TSMC and Samsung have been providing OSFAB services to the semiconductor industry for a long time. In doing so, these companies have created a niche market for themselves, and over the years, as the OSFAB business has grown, they have also added the required capacity. Another advantage OSFAB provides to FAB-LESS companies is the option to choose from a large pool of technology-node and industrial flow options. This allows FAB-LESS companies (and in some cases IDMs) a way to optimize and allocate products to different OSFABs.

    Even though OSFAB has been critical to the semiconductor industry, there seems to be a growing reliance on specific OSFAB companies. If the trend continues, then there will not only be a shortage of OSFAB capacity (due to growing semiconductor content in different products), but the dependency might also harm semiconductor companies without any internal FAB capacity.

    Recently, Intel announced IFS (Intel Foundry Services), which will open up Intel’s FAB capacity to the outside world. This is a welcome change in many aspects. Foremost, it will put pressure on companies that have dominated the OSFAB arena. It will also drive new manufacturing solutions (devices, FETs, AI-driven automated processes, etc.) that will eventually help the semiconductor industry.

    Intel’s years of design and manufacturing experience will also have an impact on the cost and capacity strategies that many of the FAB-LESS often have to focus on.

    Cost: The top FAB-LESS companies are well capable of spending on and building internal FAB capacity. However, they do not do so due to the added CapEx and operating cost. With Intel joining the OSFAB business along with TSMC, Samsung, and others, the ability to optimize cost will only increase. This can come from taking advantage of Intel's process node, which is different from (and maybe in some cases better than) its rivals' but lower in cost. FAB-LESS companies can also deploy strategies to prioritize products based on time-to-market and evaluate non-critical products at the new OSFAB to see how much cost optimization can be achieved.

    Capacity: The worldwide OSFAB capacity increase due to Intel Foundry Services will provide FAB-LESS with options to choose from and thus will allow them to allocate products to different OSFABs. This will take away the pressure of planning years in advance and also ensuring that there is no dependency on specific OSFAB. On top of all this, newly added capacity also provides companies with an option to choose the OSFAB that is more supply chain friendly.

    OSFAB business is going to heat up more in the coming years. In the next couple of years, TSMC, Samsung, Intel, and others will be competing against each other and this will only allow FAB-LESS companies to leverage the best semiconductor manufacturing solution in the market.


    Picture By Chetan Arvind Patil


    THE IMPACT OF OSFAB

    Irrespective of how Intel's new Foundry Services shapes the semiconductor industry, OSFAB certainly has a positive impact. Since the OSFAB business is primarily focused on providing cutting-edge solutions to semiconductor design houses, their internal research and development activities end up delivering strong solutions to the market.

    OSFABs, without having to focus on the design aspect of the products they manufacture, end up spending a lot of time and money perfecting the manufacturing process. This eventually pushes the industry towards next-gen products that are more efficient and at the same time powerful enough to meet target application needs.

    To summarize, there are four major aspects (a mix of technical and business) that OSFABs drive:

    Competition: The more OSFAB options there are in the market, the better it is for the end customer, i.e. the FAB-LESS companies, which gain the ability to choose from the different capacity available. From the OSFAB point of view, it is a good scenario too, as competition drives new products (FETs/devices) that can be vital in attracting a new customer base. If there is no competition and only a couple of big players hold all the OSFAB capacity, the pace of innovation slows down. This is why Intel’s decision to open up its FABs will not only add pressure on the OSFAB business but also push OSFABs towards newer FET devices.

    Quality: OSFABs focus primarily on the manufacturing side of semiconductors, which allows them to deploy strategies and solutions that ensure defect-free products. In semiconductors, where the room for error is next to none, maintaining high quality is the topmost criterion. Quality control can come from an error-free fabrication pipeline or from equipment/tools that capture defects early in the fabrication line. OSFABs have played a critical role in enabling such solutions.

    Option: OSFABs also let FAB-LESS companies mix and match a FAB with different OSATs, allowing them to diversify key products from a supply chain point of view, since depending on a single FAB/OSAT can have a negative impact. With growing OSFAB capacity, the model is becoming more business-friendly for FAB-LESS companies, as it allows them to plan products and, in many cases, execute them in parallel instead of waiting for capacity to open up.

    Efficiency: Having more OSFAB options (along with new capacity added by TSMC, Intel, and even Bosch in automotive) ensures there is never a shortage of FABs to choose from. If OSFAB X is fully occupied, a FAB-LESS company can take advantage of OSFAB Y. In many cases, going to a newer OSFAB can bring better (and future) capacity options. This frees FAB-LESS companies from planning and securing capacity and lets them put their energy into executing products for the market.

    Both OSFAB and OSAT play a crucial role in bringing a design to life. Eventually, adding more capacity is only going to drive competition and provide FAB-LESS companies with more options. It is also a vital time for countries without an OSFAB to start building one today (at least for internal consumption).

    Hopefully, Intel’s new announcement is only going to have a positive impact and will push the semiconductor industry towards newer semiconductor manufacturing solutions.


  • The Cloud Is Changing Semiconductor Industry

    The Cloud Is Changing Semiconductor Industry

    Photo by Kvistholt Photography on Unsplash


    THE IMPACT OF CLOUD IN SEMICONDUCTOR INDUSTRY

    In the high-tech industry, software plays a crucial role. The ability to manage tasks with the click of a button has a profound effect on day-to-day activities. Software-driven digitization is also a key enabler of productivity, which in turn allows companies and industries to capture markets worldwide.

    Over the last few decades, the software delivery model has evolved. From desktop to browser to smartphone apps to cloud, the different software delivery models have increased productivity. Industries worldwide have realized the potential of software-driven solutions and have rightly adopted strategies to maximize software deployment via the cloud. The same is true for the semiconductor industry.

    An advanced industry like semiconductor, where a single mistake can amount to billions of dollars in losses, certainly needs solutions that can help plug any gaps in the product development process. These solutions cover both the technical and business aspects of the product life cycle.

    To achieve high design and manufacturing standards, the semiconductor industry has also been relying heavily on cloud-based software solutions. This is why, over the last couple of decades, all major segments of the semiconductor industry have adopted a cloud strategy.

    In some cases the transformation is still underway, but irrespective of where they are on the digital transformation journey, semiconductor design and manufacturing houses realize the importance of hopping onto cloud solutions and taking advantage of every detail by capturing and connecting all possible data points.

    There are numerous ways in which cloud strategy is impacting the semiconductor industry:

    Optimization: Cloud-based solutions play a vital role in tracking all project and product details. The tools capture every change (technical and business) and ensure that the information is available on the go. Capturing details this way enables optimized operations, keeping the design-to-manufacturing flow well connected to the specifications and the execution plan.

    Defect And Error Free: By making use of smart, connected equipment with cloud-powered data analytics tools, semiconductor companies can ensure that the end product is defect-free. Deploying smart cloud tools that capture gaps in the process/recipe ensures that there are no errors in the manufacturing flows.
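
    To make this concrete, one common form such a cloud-side check takes is statistical process control (SPC): compute control limits from a known-good baseline and flag any reading that drifts outside them. Below is a minimal sketch of that idea; the parameter values are purely illustrative, not from any real process.

```python
# Minimal statistical process control (SPC) sketch: flag equipment
# readings that drift outside +/-3-sigma control limits computed
# from a known-good baseline. All numbers are illustrative.
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Return (lower, upper) control limits from baseline readings."""
    mu, sigma = mean(baseline), stdev(baseline)
    return mu - k * sigma, mu + k * sigma

def flag_excursions(readings, limits):
    """Return indices of readings outside the control limits."""
    lo, hi = limits
    return [i for i, r in enumerate(readings) if not lo <= r <= hi]

baseline = [4.98, 5.01, 5.00, 4.99, 5.02, 5.00, 4.97, 5.03]
limits = control_limits(baseline)
new_lot = [5.00, 5.01, 5.40, 4.99]  # third reading drifts out of spec
print(flag_excursions(new_lot, limits))  # -> [2]
```

    In a real FAB, checks like this would run continuously against live equipment telemetry rather than small in-memory lists.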

    On-Time: Delivering products to the customer on time is key to capturing a large market, and it often requires swift coordination between multiple cross-functional teams. Connecting and sharing information seamlessly needs cloud-powered, data-backed solutions that can track every detail from forecasting to material handling. Capturing the minutest details ensures that the product reaches the end customer on time.

    End-To-End Process: The semiconductor industry is built on several segments, from design houses to equipment manufacturers to FABs to OSAT manufacturing sites, and many more. All these segments should be connected end-to-end to provide a holistic view of how product development, manufacturing, and delivery flow. This is where cloud-based solutions are useful, and they also bring transparency.

    To create a robust and high-quality semiconductor product, numerous data points are required. This is possible only with the help of tools and solutions that connect and provide a detailed view of different stages of the product development cycle. This is why a cloud-based end-to-end solution is heavily used in the semiconductor industry.

    As more FABs and OSATs get established in different parts of the world, a cloud strategy should be one of the top priorities. Doing so will only increase productivity and enable a better customer experience.


    Picture By Chetan Arvind Patil

    THE USE OF CLOUD IN SEMICONDUCTOR INDUSTRY

    Cloud-based solutions are used at every step of semiconductor product development. The efficiency and productivity such tools bring to the different stages of product development ensure that the end product is defect-free.

    Different segments of the semiconductor product development require a unique cloud strategy. These can range from specific EDA tools to massive data storage instances. In the end, all these data points are connected/tied to a specific product. This way any team from anywhere can access the needed data/information to execute the task at hand.

    Research And Development: Designing the devices, circuits, and layouts that will eventually be fabricated into a silicon product is the first step towards a turnkey semiconductor solution. For several years, the semiconductor industry has relied on software-based design tools, and in the last decade the same software has moved from desktop to cloud. This has enabled designers and researchers to access files and libraries on the go. On top of that, it has removed the need for the high-performance local systems that were earlier required to run tons of simulations: today, one can simply deploy a job and get on with other work while the simulations run in the cloud. Using the cloud, companies can also ensure there is no IP infringement, as checks can run on the fly and tools can compare design solutions against a massive pool of cell libraries. All this ensures that the right product is developed by following industry-standard protocols.

    Data Analysis: Data is generated at every stage of silicon development, whether it is simulations in the design stage or fabrication/testing in FABs/OSATs. This is why advanced statistical analysis is carried out on every data point generated. Doing so on the fly means deploying solutions that run close to the data-generating source. And when a data source is not error-free, data engineers can question it and take a deeper look using analytical tools. In both scenarios, highly advanced tools are required, which means using cloud-based solutions that are easy to access and loaded with features that enable accurate data exploration.
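
    As a small illustration of the statistical screening such tools perform, the sketch below flags outlier parametric test readings using the standard interquartile-range (IQR) rule. The readings and units are invented for illustration.

```python
# Flag outlier parametric test readings with the IQR rule: anything
# outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is treated as suspect.
# The readings below are made up, not from a real test program.
from statistics import quantiles

def iqr_outliers(values):
    """Return the values falling outside the 1.5*IQR fences."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

# Leakage-current readings (uA) for one lot; 9.8 is a clear outlier.
readings = [1.1, 1.0, 1.2, 0.9, 1.1, 9.8, 1.0, 1.2]
print(iqr_outliers(readings))  # -> [9.8]
```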

    Supply Chain Management: Delivering products to the end customer often requires connecting several dots across different systems. This is why supply chain management is needed: it ensures the product and its processes are tracked using a unique bill of materials. This often requires specific cloud-based tools that can swiftly retrieve any information related to a product and provide its full history from inception to delivery. Doing such a task without cloud tools would certainly invite errors.

    Market Analysis: Capturing market trends to understand which products will give maximum profit is key to success. This often requires capturing different data points and then aligning the product roadmap with the market (that is, customer) requirements, ensuring that CapEx is diverted towards high-revenue products. Such planning and projection are not possible without capturing different data points and customer developments. This is where cloud-based market analysis comes in handy and helps ensure that projects are profitable.

    Resource Management: Semiconductor equipment, tools, and labs require high CapEx, and such investment is viable only if there is a high ROI. To achieve positive ROI, the resources need to be looked after, which requires efficient management: periodic maintenance to ensure minimum downtime. For such management, cloud-based solutions are deployed to capture equipment data and raise alerts before tools go down or require maintenance.
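
    As a toy example of such an alert, the sketch below flags tools whose accumulated runtime is approaching a service interval. The tool names, interval, and threshold are hypothetical.

```python
# Raise a maintenance alert when a tool's accumulated runtime nears
# its service interval. All names and numbers are illustrative.
SERVICE_INTERVAL_H = 2000   # hours between scheduled maintenance
ALERT_THRESHOLD = 0.9       # alert at 90% of the interval

def maintenance_alerts(runtime_hours):
    """Return IDs of tools that are due (or nearly due) for service."""
    return [tool for tool, hours in sorted(runtime_hours.items())
            if hours >= ALERT_THRESHOLD * SERVICE_INTERVAL_H]

fleet = {"etcher-01": 1850, "prober-03": 2010, "stepper-02": 420}
print(maintenance_alerts(fleet))  # -> ['etcher-01', 'prober-03']
```

    A production version would feed on live equipment counters and push notifications to the maintenance team instead of printing a list.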

    Factory Operation: Running FABs and OSATs 24×7 is key to ensuring that the facilities break even as quickly as possible. This requires capturing second-to-second activity in every corner of the factory, which is possible only when a connected system is deployed that can raise an alarm in case of downtime and also provide the remote status of every piece of equipment. Cloud-based solutions already play a key role here and are thus heavily used by semiconductor FABs and OSATs.

    Logistics: Shipping is a big part of the semiconductor product development cycle. Wafers often come from an outside supplier, get fabricated at one location, and then get tested in a different part of the world. This demands optimized logistics and real-time tracking of material. While most logistics providers already use cloud-based solutions to track and deliver packages, semiconductor companies often also have to invest internally in systems that can generate shipping labels and track customer deliveries. This is why cloud-based logistics solutions are handy for semiconductor companies.

    Archiving: Saving every data point related to the product is vital. This data ranges from design files to test data to financial records, all of which need to be stored long-term so that whenever a query or comparison has to be done, the data can be easily retrieved. Quick retrieval and analysis are possible only if cloud solutions are deployed.

    The use of the cloud in the semiconductor industry will keep growing. In the next few years, more sophisticated and smart tools will be deployed. Factory operations are already utilizing high-tech cloud solutions for product fabrication, and Design and Supply Chain Management are not far behind. The cloud market for the semiconductor industry will only keep growing, with more new players entering product development every year.


  • The Pillars Of Semiconductor Industry

    The Pillars Of Semiconductor Industry

    Photo by Tanner Boriack on Unsplash


    THE IMPORTANCE OF PILLARS IN THE SEMICONDUCTOR INDUSTRY

    There are numerous industries worldwide catering to different markets. A company in a given industry will rarely provide all the end-to-end solutions needed to develop products in-house.

    Almost all of the companies within a given industry rely heavily on different segments (from the same industry) to achieve the goal of producing high-quality products for their target customers.

    The same fundamentals hold for the semiconductor industry. Without a support environment, a product will not meet the customer’s high-quality expectations, more so when a silicon chip can be used in numerous types of applications anywhere in the world. That is why quality and reliability need to be above par.

    To satisfy customer requirements, different types of support systems are needed. In semiconductors, these support systems can be called pillars. The pillars of the semiconductor industry can be logically separated into three segments:

    Front-End: Research, design, and marketing.

    Middle-End: Support, equipment, and software.

    Back-End: Manufacturing, supply chain, and sales.

    Each of these three segments plays a crucial role in bringing a product to the market. They do so to provide the following important traits:

    Cost: Semiconductor companies are immensely focused on cost reduction to grow their product margins. This requires three-way synchronization of the three segments discussed above, which bridge their gaps to ensure zero delay and zero waste. Such practice creates a low-cost but high-quality flow for semiconductor product development.

    Time: Delivering products on time is key. Delays in projects can hurt a company’s reputation and growth, which requires all three segments to work in harmony. Using a data-sharing approach, the different segments work together to eliminate all possible delays, thus ensuring time-bound delivery.

    Growth: Connecting the three segments seamlessly ensures that the product is profitable in the long run. Ensuring there are no gaps between the three segments enables a high growth margin too.

    Quality: Working with different segments within the same industry also requires effort to maintain high quality and standards. This is what the three segments work towards, ensuring different tools and processes come together to provide reliable products.

    Innovation: In the end, semiconductor companies are focused on innovating next-gen solutions. This is possible only when all three segments are set up and connected to drive innovation, which often requires a high level of operational management, something the Front/Middle/Back segments have incorporated for a long time.

    The three pillars of the semiconductor industry play an important role in product success and profitability. However, it takes a lot of effort and time to build the web of solution providers that forms a robust three-way network.


    Picture By Chetan Arvind Patil

    THE ROLE OF DIFFERENT PILLARS IN THE SEMICONDUCTOR INDUSTRY

    The semiconductor industry views Front-End, Middle-End, and Back-End from a technology-node or semiconductor fabrication point of view. However, it is about time that the world also sees these three segments from a business point of view.

    Each of these three segments is vital in ensuring that the product is delivered to the market on time.

    Front-End: The Front-End segment captures the customer/market requirements that teams use to design a product. This is the research and development part that brings product ideas to reality. From a marketing point of view, it can also be seen as a stepping stone towards next-gen products.

    Middle-End: The Middle-End segment connects the design and marketing side of the product to the Back-End side that delivers the fabricated product to the end customer. It also consists of different support environments: software, equipment, facilities, and so on.

    Back-End: In this segment, the product comes to life. It covers the manufacturing, logistics, and supply chain side that ensures the design reaches the market in time. Back-End is also about working with different assembly teams to ensure the semiconductor product, once mounted on the system, works as per the specification.

    This view might confuse industry veterans who have only looked at Front/Middle/Back-End from a fabrication point of view, but as the world moves towards more connected product development, it is equally important to understand how and which segments of the industry work in synchronization to enable advanced technology.

    Viewing the end-to-end semiconductor industry chain from the above three pillars gives a more realistic view of how the semiconductor industry works and the importance each of these segments/pillars brings.


  • The Role Of Root Cause Analysis In Semiconductor Manufacturing

    The Role Of Root Cause Analysis In Semiconductor Manufacturing

    Photo by Mathew Schwartz on Unsplash


    THE IMPACT OF SEMICONDUCTOR ROOT CAUSE ANALYSIS

    Delivering high-grade products is the ultimate goal of every manufacturer. To do so, different industry-specific standards and processes are used. The same applies to the semiconductor manufacturing industry.

    In semiconductors, as the technology node shrinks, the cost to manufacture the tiny silicon increases too. To ensure there is no waste of material, time, or cost, several strategies are deployed to screen parts before they move ahead in the fabrication/manufacturing line.

    One such strategy is root cause analysis. It often happens that a product under development encounters an issue during the qualification, testing, or packaging stage. No matter where the failure occurs, it is critical to understand the root cause, especially in an industry like semiconductor manufacturing, where every failing part can jeopardize not only the product itself but also the system into which it will eventually be soldered. Such a scenario can lead to millions of dollars in losses, apart from damage to business reputation.

    This is why holding on to every failing product and carrying out detailed root cause analysis is one of the major pillars of the semiconductor industry.

    Over the last few decades, as devices have become tinier than ever, the importance of root cause analysis has grown due to the several impacts it has on the product:

    Cost: Finding the root cause of why a product failed empowers the team with data points to take the necessary actions. It often turns out that the root cause lies in the setup or the lab where the qualification or testing was carried out and has nothing to do with the product itself. Root cause analysis and the resulting actions can save cost by eliminating the need to redo either the design or the qualification. It also ensures that the product will not fail in the field, thus saving years of investment.

    TTM (Time-To-Market): Given the stiff competition in the semiconductor industry, the ultimate goal of every design and manufacturing house is to ensure that the product is launched within the planned time frame. In case of failures, root cause analysis provides a way to capture any severe design or process issue early on. This allows course correction to ensure the product still makes it to the market in time.

    Quality: Qualification is an important process before a product is released to production. Root cause analysis of any product failing during the development stage ensures that the product meets high-quality industry standards. The standards rise with the target domain, and so does the importance of root cause analysis of the failing product/part.

    DPPM/DPPB: Defective Parts Per Million/Billion is an industry-standard metric, and the goal of semiconductor manufacturing is to lower it. In case of field failures, root cause analysis comes in handy, as it ensures the severe fails are captured to lower the DPPM/DPPB further, more so when the product/part will be used in critical applications like automotive or wireless communications.
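
    Since DPPM is simply the defect ratio scaled to a million parts, the arithmetic is straightforward. The lot figures below are made up for illustration.

```python
# DPPM = (defective parts / total parts shipped) * 1,000,000.
# The shipment numbers below are illustrative, not real data.
def dppm(defective, total):
    """Defective parts per million shipped."""
    return defective / total * 1_000_000

# 3 field failures out of 1.5 million parts shipped:
print(dppm(3, 1_500_000))  # -> 2.0, i.e. 2 DPPM
```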

    Root cause analysis has a major impact on semiconductor manufacturing. With new FETs and equipment launching every year, analyzing every product-related failure, small or big, is vital for both FAB and FAB-LESS companies.


    Picture By Chetan Arvind Patil

    THE STEPS OF SEMICONDUCTOR ROOT CAUSE ANALYSIS

    To capture the root cause that led to the failure of a product, several steps are followed. Each step plays a key role in providing data points to establish the root cause and drive defect-free manufacturing.

    Inspection: Any failing part/product first and foremost goes through a detailed inspection, performed according to the state of the product (wafer, die, or assembled package). The images generated are then used to establish whether a non-technical cause (handling, stress, recipe issue, etc.) led to the failure. X-ray and SEM are widely used, as these techniques reveal internal product details.

    Data: The data points for the failing part/product come from different stages. All parts carry markings to trace their origin. As a starting point, the first data point is collected by testing the product non-destructively. Another is the inspection data that the FAB and OSAT may have on the failing part/product. Beyond these two, several other data points (recipe errors, handling issues, the material used, and many more) are captured to establish whether the root cause can be determined from the data alone.
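
    To illustrate the traceability idea, the sketch below resolves a unit marking back to its lot-level history. Every ID, field, and marking scheme here is hypothetical, purely to show the kind of lookup such markings enable.

```python
# Hypothetical traceability lookup: a unit's marking maps back to its
# lot, and the lot record holds the history captured at each stage.
# All IDs, fields, and the marking scheme are invented.
HISTORY = {
    "LOT-A17": {
        "fab": "FAB-3",
        "osat": "OSAT-1",
        "stages": ["wafer-sort", "assembly", "final-test"],
        "recipe_rev": "R2.4",
    },
}

def trace(unit_marking):
    """Resolve a marking like 'LOT-A17-0042' to its lot history."""
    lot_id = "-".join(unit_marking.split("-")[:2])
    return HISTORY.get(lot_id)

record = trace("LOT-A17-0042")
print(record["fab"], record["stages"])
```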

    Testing: Testing can be performed in two ways: on ATE and on the BENCH. On ATE, the part/product is run to capture the failing tests, which provides an early hint about which blocks are failing. If there is no firm conclusion, BENCH validation offers a way to find out whether the part indeed failed or there was a setup error. In many cases, different testing scenarios are also used to capture testing-related data points.

    Reproduce Failure: This is the basis of root cause analysis. Once the failure mode is understood, the setup is replicated with the exact error conditions to see whether the part/product fails repeatedly. This often requires a known-good setup and known-good parts, with the suspect condition as the only varying parameter, to isolate the failing scenario.

    Localization: Root cause analysis can also lead to the discovery of issues in the product itself, which requires the team to establish the location of the design issue. This is achieved with the help of all the above steps and, most importantly, by using equipment to show that when the part/product is biased, it fails due to a hotspot at a specific location in the layout. Advanced equipment is used to capture such data.

    Documentation: In the end, everything needs to be documented. Documentation not only captures the reasons for part/product failure but also enables cross-functional learning. In the future, the documentation can be used to minimize the effort needed to establish the root cause in cases where the failing trend is similar. 8D and other problem-solving methods are used to document detailed root cause analysis.

    Finding the cause of a failure is like finding a needle in a haystack. The growing complexity of products (due to advanced technology nodes) is making root cause analysis an important and, at the same time, difficult task.


    Picture By Chetan Arvind Patil

    THE PILLARS OF SEMICONDUCTOR ROOT CAUSE ANALYSIS

    Performing root cause analysis is certainly a team effort. Experienced people play a vital role in reducing the time it takes to establish the root cause.

    However, there are two major pillars without which root cause analysis is not possible. These pillars also require massive investment to establish and often have a high operating cost.

    Lab: A dedicated space is required where experienced engineers can take the failing part and establish the root cause by biasing the part in a controlled environment. This requires skill-based training and the ability to understand any failing product/part that comes into the lab.

    Equipment: To run a root cause analysis lab, advanced equipment is also needed. This equipment can perform detailed inspection, apart from carrying out different setups/tests to reproduce the failure.

    The majority of semiconductor companies have a dedicated in-house lab to perform root cause analysis, while some prefer to outsource. Eventually, it is all about managing resources and lowering cost without compromising on DPPM/DPPB.

    As the industry moves towards more complex products that will power everything everywhere, capturing the root cause of every product that fails during the development or production stage will become more vital than ever.