Category: TECHNOLOGY

  • The AI World Beyond Semiconductor GPUs

    Image Generated Using Adobe Firefly


    Graphics Processing Units (GPUs) have become the default go-to architecture whenever the requirement is faster throughput. The primary reason is their massively parallel processing, enabled by many cores and the way memory is organized around them.

    Due to the benefits of such an architecture, the AI World has also adopted GPUs as its go-to silicon architecture. The goal is to process large amounts of data in the shortest time possible; other technical reasons are reusability and portability, which lower the entry barrier for new companies developing large-scale AI solutions.

    Several semiconductor companies provide GPU solutions. However, NVIDIA is winning the GPU race so far, mainly because of the near-perfect software-to-silicon ecosystem it has created. It enables new and existing customers to swiftly adapt to the latest GPU generation and to new AI frameworks, all while keeping reusability and portability costs in check.

    What does not work in favor of GPU architecture:

    Availability: GPUs (mainly from NVIDIA) are inching towards 3nm. There will be a race to capture the available worldwide capacity, with only one pure-play vendor capable of producing yieldable silicon chips at these nodes. Securing the required capacity against this demand will take a lot of work.

    Cost: GPUs will start adopting ultra-advanced (3nm and lower) nodes, and the cost of designing and manufacturing these silicon chips will increase further. More so because GPUs have yet to find a way out of the die-level solution onto a More-Than-Moore (MtM) path. In a year or two, GPUs designed for AI workloads will surely reach the reticle limit, beyond which even EUV scanners cannot print a larger die.
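    As a back-of-the-envelope check on the reticle-limit point, the sketch below assumes the standard ~26 mm x 33 mm scanner field (about 858 mm²); the die sizes are hypothetical, chosen only to illustrate the calculation.

```python
# Assumed standard lithography scanner field: 26 mm x 33 mm (~858 mm^2).
# Die dimensions below are hypothetical, for illustration only.
RETICLE_LIMIT_MM2 = 26 * 33  # = 858 mm^2

def reticle_utilization(die_w_mm, die_h_mm):
    """Fraction of the maximum printable field a single die consumes."""
    return die_w_mm * die_h_mm / RETICLE_LIMIT_MM2

# A die filling the whole field uses 100% of the reticle limit;
# beyond that, a monolithic die simply cannot be printed.
full_field = reticle_utilization(26, 33)   # 1.0
half_field = reticle_utilization(13, 33)   # 0.5
```

    Large AI accelerator dies today already sit near the top of this range, which is why chiplet and other MtM approaches become attractive.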

    Not Application-Specific: GPUs remain general-purpose with respect to application requirements. Their SIMD, MIMD, floating-point, and vector execution models fit only some workloads. Consequently, AI developers (mainly large-scale software companies) will keep seeing the need for more application-specific architectures (which is why TPUs came into existence) that can provide a solution-level alternative to GPUs.

    Deployment: Deploying stacked GPUs is like bringing up a massive farm, which increases the cost of operating such data centers. On top of that, the more powerful the GPUs, the more demanding the applications become. The growing volume of data processing requests thus drives up both performance requirements and energy consumption.

    Sooner or later, even the GPU architecture will reach a state where it may not be the first choice for AI. Currently, the software industry (or the AI industry) relies on GPUs primarily because the mega data centers use them and they are the most broadly deployed architecture in the market.

    However, as more new types of AI usage and requirements arise, the software industry will realize that the GPU architecture is unsuitable for their applications. Thus, there is a demand for more customized performance-oriented silicon architecture.


    Picture By Chetan Arvind Patil

    The need for customized AI silicon architecture has already caught the eye of both the software and silicon industries, leading to more silicon-level solutions that can replace, or at least give robust competition to, the GPU architecture.

    There are specific types of silicon architecture with the potential to replace GPUs shortly. Below are a few:

    Wafer-Scale Engine (WSE) SoCs:

    The Wafer-Scale Engine (WSE) represents a paradigm shift in computing, indicating a new era where traditional GPUs get replaced in specific applications. Unlike GPUs that contain thousands of small processors, a WSE is a single, giant chip that can house hundreds of thousands of cores capable of parallel processing. This architectural leap enables a WSE to process AI and machine learning workloads more efficiently due to its vast on-chip memory and reduced data transfer latency. By eliminating the bottlenecks inherent in multi-chip approaches, a WSE can deliver unprecedented performance, potentially outpacing GPUs in tasks that can leverage its massive, monolithic design. As AI and complex simulations demand ever-faster computational speeds, WSEs could supplant GPUs in high-performance computing tasks, offering a glimpse into the future of specialized computing hardware.

    Chiplets With RISC-V SoCs:

    Chiplets utilizing the RISC-V architecture present a compelling alternative to conventional GPUs for specific computing tasks, mainly due to their modularity and customizability. RISC-V, being an open-source instruction set architecture (ISA), allows for the creation of specialized processing units tailored to specific workloads. When these processors are implemented as chiplets (small, modular silicon blocks), they can be assembled into a coherent, scalable system. The computing system gets optimized for parallel processing, similar to GPUs, but with the added advantage of each chiplet being custom-crafted to handle particular segments of a workload efficiently. In scenarios where energy efficiency, space constraints, and specific application optimizations are paramount, RISC-V chiplets could feasibly replace GPUs by providing similar or superior performance metrics while reducing power consumption and increasing processing speed by tailoring the hardware directly to the software’s needs.

    Tensor Processing Units SoCs:

    Tensor Processing Units (TPUs), application-specific integrated circuits (ASICs) designed for machine learning tasks, offer a specialized alternative to GPUs. As System-on-a-chip (SoC) designs, TPUs integrate all the components needed for neural network processing onto a single chip, including memory and high-speed interconnects. Their architecture is tuned for the rapid execution of tensor operations, the heart of many AI algorithms, which enables them to process these workloads more efficiently than the more general-purpose GPUs. With their ability to perform a higher number of operations per watt and their lower latency due to on-chip integration, TPUs in an SoC format can provide a more efficient solution for companies running large-scale machine learning computations, potentially replacing GPUs in data centers and AI research facilities where the speed and efficiency of neural network processing are crucial.

    PIM SoCs:

    Processing-in-memory (PIM) technology, particularly when embedded within a System on a Chip (SoC), is poised to disrupt the traditional GPU market by addressing the ‘memory wall’ problem. PIM architectures integrate processing capabilities directly into the memory chips, allowing data computation where it is stored, thereby reducing the time and energy spent moving data between the processor and memory. As an SoC, integrating PIM with other necessary system components can lead to even more significant optimizations and system-level efficiency. In applications such as data analytics, neural networks, and other tasks that require rapid, parallel processing of large data sets, PIM SoCs could potentially outperform GPUs by leveraging their ability to bypass the data transfer bottlenecks that GPUs face, delivering faster insights and responses, especially in real-time processing scenarios.

    One factor all of the above solutions need to succeed is a software ecosystem that AI developers can rely on. All new solutions require a level of abstraction that makes them easier to adopt. So far, with the CUDA ecosystem and the AI frameworks optimized around CUDA, NVIDIA has aced this domain.

    Like the CPU domain, the GPU domain cannot remain dominated by a select few. Soon, there will be promising SoCs that can pitch themselves as the potential future of the AI World, which will also push GPU architecture innovation to its limit.

    The next five years will reveal how the “Silicon Chip For AI World” segment will evolve, but it certainly is poised for disruption.


  • The Ever-Growing Need For Semiconductor Yield Management

    Image Generated Using Adobe Firefly


    The manufacturing of semiconductor devices is a highly intricate process that involves multiple steps. Even minor deviations in these steps can result in defects that render the device non-functional. Semiconductor yield management is a process that aims to identify and address yield issues to ensure that the devices produced are of the desired quality and functionality. Its main goal is to maximize yield by minimizing the number of defective devices and optimizing the manufacturing process for better productivity.

    Often, yield management is assumed to be only about silicon test data. In reality, it is much more than that: it is about ensuring that every process step, from wafer start until the final packaging stage, is equipped with a yield monitoring process.

    Semiconductor yield management requires specialized tools and software to monitor, analyze, and optimize the production process. These tools also need trained talent to use them and make the most of the data presented.

    As an example, defect inspection tools scan wafers at various stages of the process flow, allowing the detection of physical defects. Similarly, Scanning Electron Microscopes (SEMs) provide high-resolution imaging to detect and analyze flaws that defect inspection tools cannot capture, mainly as part of failure analysis.

    At the testing stages, yield management software (YMS) that works synchronously with Automated Test Equipment (ATE) to capture, process, and analyze vast amounts of data is needed. This end-to-end data is crucial for ensuring the yield out of the wafer is in line with the specification, and YMS tools offer various features to facilitate the review of any excursions or issues that arise during testing.

    In line with YMS, there is Statistical Process Control (SPC) software, which ensures in real time that the process remains within specified limits. On top of that, many SPC tools provide different types of rules that can capture test-driven deviations in real time during silicon wafer testing.
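    As an illustration of the rule-based deviation capture that SPC tools perform, here is a minimal Python sketch of two classic control-chart rules. The rule set, limits, and data are simplified stand-ins for what a real SPC package provides.

```python
# Minimal sketch of two classic control-chart (Western Electric style)
# rules applied to a stream of parametric test measurements.
# The rules and thresholds here are illustrative, not a real SPC config.

def spc_violations(values, mean, sigma, run_length=8):
    """Return (index, rule) pairs for points that violate a rule."""
    violations = []
    run = 0          # consecutive points on the same side of the mean
    last_side = 0
    for i, v in enumerate(values):
        # Rule 1: a single point beyond +/- 3 sigma
        if abs(v - mean) > 3 * sigma:
            violations.append((i, "beyond-3-sigma"))
        # Rule 2: run_length consecutive points on one side of the mean
        side = 1 if v > mean else -1
        run = run + 1 if side == last_side else 1
        last_side = side
        if run == run_length:
            violations.append((i, "one-sided-run"))
    return violations
```

    Rule 1 catches gross excursions; rule 2 catches a sustained drift that never crosses the hard limits, which is exactly the kind of deviation a single-point check would miss.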


    Picture By Chetan Arvind Patil

    Semiconductor yield management is a multifaceted domain that combines technical expertise with analytical and interpersonal skills. Some of the top-level skills professionals in this field need are:

    Analytics: Yield management involves analyzing large amounts of data to identify patterns, anomalies, and potential causes of defects. Professionals in this field must be skilled in dissecting data, using statistical tools, and deriving actionable insights from complex datasets. Additionally, the ability to write automated tools in various programming languages is an added advantage.

    Focus: Manufacturing semiconductors involves thousands of intricate steps, and even the slightest deviation can lead to defects. It is essential to possess a keen eye for detail to identify and rectify minute discrepancies that others might overlook. Additionally, understanding different sets of fabrication process steps and how to connect the anomalies is also crucial for ensuring a successful outcome.

    Problem-Solving: Yield management professionals must quickly diagnose and resolve issues during the production process, using a structured approach to creative problem-solving.

    Communication: Interdisciplinary teams of yield management professionals must communicate effectively to relay findings, suggest improvements, and ensure corrective measures are implemented. It is like a decision review system (as in Cricket), where the solution goes through a review before it gets deployed.

    Collaboration: Yield improvement is a collective effort. Collaboration among professionals from various functions is crucial for yield improvement.

    Decision: With the vast data, yield management professionals must make timely and informed decisions, balancing the trade-offs between quality, cost, and production speed.

    Traditionally, semiconductor end-to-end data is not only challenging to capture (due to disaggregated tools and equipment) but also costly to analyze, because of the need to deploy a proper set of hardware and software systems that provide end-to-end data capture.

    So far, many yield management flows in the semiconductor industry have yet to adopt a cloud and AI strategy. With AI usage increasing, it has become crucial for semiconductor companies not only to embrace AI solutions but also to adopt cloud solutions so AI features can be developed at scale, as the two go hand in hand.

    Cloud: With the increasing complexity of semiconductor designs and processes, the amount of data generated during manufacturing has grown exponentially. Cloud platforms offer vast storage capabilities and enable seamless access to data across global manufacturing sites. This centralization facilitates real-time data analysis and quicker decision-making. Semiconductor engineers and data scientists from different locations can simultaneously analyze data, discuss findings, and implement solutions, ensuring that yield optimization strategies are consistent across facilities.

    Art Of AI: AI algorithms can predict potential yield issues based on historical data. This proactive approach allows manufacturers to address potential problems before they manifest, ensuring higher yields. Traditional methods can be time-consuming. AI can quickly sift through vast datasets to pinpoint the root causes of defects, accelerating the troubleshooting process.
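    As a toy illustration of this root-cause idea, the stdlib-only Python sketch below ranks process parameters by how strongly they correlate with a pass/fail outcome. The parameter names and data are invented, and production systems would use far richer models than a simple correlation.

```python
# Hypothetical sketch: rank process parameters by correlation with a
# binary pass/fail outcome, a crude first step toward AI-driven
# root-cause analysis. All parameter names and values are invented.
from statistics import mean, pstdev

def rank_parameters(records, outcome_key="fail"):
    """records: dicts of parameter -> value plus a 0/1 outcome flag."""
    params = [k for k in records[0] if k != outcome_key]
    y = [r[outcome_key] for r in records]
    scores = {}
    for p in params:
        x = [r[p] for r in records]
        # Absolute Pearson correlation between parameter and failures
        mx, my = mean(x), mean(y)
        cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
        sx, sy = pstdev(x), pstdev(y)
        scores[p] = abs(cov / (sx * sy)) if sx and sy else 0.0
    return sorted(scores, key=scores.get, reverse=True)

# Invented lot data: failures track etch_time; pressure is constant.
sample = [
    {"etch_time": 1.0, "pressure": 5.0, "fail": 0},
    {"etch_time": 2.0, "pressure": 5.0, "fail": 1},
    {"etch_time": 1.1, "pressure": 5.0, "fail": 0},
    {"etch_time": 2.1, "pressure": 5.0, "fail": 1},
]
```

    Even this crude ranking surfaces the suspicious parameter first, which is the kind of triage that shortens the troubleshooting loop described above.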

    Hurdles: Transitioning to the cloud brings concerns about data security and intellectual property protection. Semiconductor firms need robust security protocols to prevent breaches. Integrating cloud and AI solutions with existing infrastructure and workflows can be complex.

    Future: The fusion of cloud computing and AI in semiconductor yield management promises a future of higher yields, reduced production costs, and faster time-to-market. As these technologies mature and the industry overcomes the initial hurdles, one can expect a more streamlined, efficient, and responsive semiconductor manufacturing landscape. The focus will shift from reactive problem-solving to proactive yield optimization, ushering in a new era of semiconductor production excellence.

    Semiconductor yield management has evolved from manual inspections and rudimentary data analysis in the past to sophisticated real-time monitoring and advanced analytics in the present.

    As the semiconductor industry looks to the future, integrating cloud computing and AI promises unprecedented optimization, driving yields to new heights and redefining industry standards.


  • The Semiconductor AISoC Platform Alliance


    Image Generated Using Adobe Firefly


    The realm of artificial intelligence (AI) is experiencing a transformative shift, primarily driven by innovative AI chips developed by startups. These enterprises are challenging the status quo, bringing cutting-edge chip architectures tailored to optimize deep learning, neural networks, and other AI tasks.

    By pushing the boundaries of processing speed, energy efficiency, and on-chip intelligence, these startups are enhancing AI’s capabilities and making it more accessible.

    To take this to a new level, early this week, several leading AI chip startups, including Ampere, Cerebras Systems, Furiosa, Graphcore, and others, announced a consortium called the AI Platform Alliance. Spearheaded by Ampere, this alliance seeks to make AI platforms more open, efficient, and sustainable.

    Image Source: AI Platform Alliance

    As the AI Platform Alliance progresses, more startups will join it. In the long run, bringing more silicon chip ideas forward will be an excellent initiative. Some of the key areas where this AI Platform Alliance can be crucial:

    Software: Making it more accessible for emerging AI silicon startups to create silicon chips by quickly enabling the porting of existing applications, something several of the AI chip startups struggle with.

    Open Source: Enabling more open-source AI silicon chip initiatives that could make it easier for future startups to bring their products to market quickly.

    Standards: By providing more standardized AI chip-focused protocols to lower the new startups’ technology barrier.

    Benchmarking: Coming up with more standardized AI-focused benchmarking that can bring reliable comparison across silicon architectures.


    Picture By Chetan Arvind Patil

    Let us also look at the companies/startups developing silicon-level technology to drive AISoC design, which have also led the efforts to launch the AI Platform Alliance.

    Ampere Computing: Ampere Computing is a company known for designing and developing cloud-native Arm-based processors, primarily targeting data centers and cloud computing environments. The company utilizes the Arm architecture for its processors.

    Cerebras Systems: Cerebras Systems is a pioneering technology company known for its groundbreaking work in artificial intelligence (AI) hardware. Their flagship product, the Cerebras Wafer Scale Engine (WSE), stands out as the world’s largest semiconductor device, encompassing an entire silicon wafer. Unlike traditional chip manufacturing, where individual chips are cut from a silicon wafer, the WSE utilizes the whole wafer, resulting in a single, massive chip with over 1.2 trillion transistors and 400,000 AI-optimized cores. Cerebras aims to accelerate deep learning tasks and push the boundaries of AI computational capabilities.

    FuriosaAI: FuriosaAI is an AI chip startup that creates next-generation NPU products aimed at unlocking the next frontier of AI deployment. Next year, FuriosaAI is gearing up to launch a High Bandwidth Memory 3 (HBM3) powered silicon chip that can provide H100-level performance to power ChatGPT-scale models.

    Graphcore: Graphcore is a notable artificial intelligence (AI) hardware player. Established in 2016 and headquartered in Bristol, UK, the company has made significant strides in developing specialized processors for machine learning and AI applications. Their primary product is the Intelligence Processing Unit (IPU), a novel chip architecture designed from the ground up to accelerate both training and inference tasks in deep learning.

    Kalray: Not a startup. Kalray is a technology company specializing in designing and developing multi-core processors for embedded and data center applications. Founded in 2008 and headquartered in Grenoble, France, Kalray’s primary offering is the MPPA (Massively Parallel Processor Array) technology. This unique processor architecture is designed to provide a high level of computing power while maintaining energy efficiency, making it suitable for applications where both performance and low power consumption are crucial.

    Kinara: Led by Silicon Valley veterans and a development team in India, Kinara focuses on Edge AI processors and modules that deliver scalable performance options to support applications with stringent power demands or the highest computing requirements. It has launched Ara-1, an edge AI processor that provides an ideal balance of computing performance and power efficiency to optimize intelligent applications at the edge.

    Luminous Computing: Still in the early stage of silicon chip development, Luminous is focusing on building the most powerful, scalable AI accelerator.

    Neuchips: Neuchips is an AI ASIC solution provider focusing on signal processing, neural networks, and circuit design. It also has a good portfolio of products ranging from highly accurate AI computing engines to efficient recommendation inference engines.

    Rebellions: Focused on bringing the world’s best inference performance for Edge and Cloud Computing. Their ATOM series delivers uncompromised inference performance across different ML tasks, computer vision, natural language processing, and recommendation models.

    SAPEON: Focused on building a Hyper-Cloud AI Processor. SAPEON has an optimal architecture for low-latency, large-scale inference of deep neural networks. They have already launched products designed to process artificial intelligence tasks faster, using less power by efficiently processing large amounts of data simultaneously.

    Given the surge in AI’s popularity, there’s an increasing demand for computing power, with AI inferencing needing up to 10 times more computing over its lifespan. Such an alliance can help improve power and cost efficiency in AI hardware to surpass GPU performance levels.

    AI Platform Alliance plans to create community-developed AI solutions, emphasizing openness, efficiency, and responsible, sustainable infrastructure. It is undoubtedly a significant step towards creating a new class of AISoCs.


  • The Minor And Critical Semiconductor Test Engineering Differences At Matured And Advanced Node


    Image Generated Using Adobe Firefly


    Semiconductor test engineering is a specialized domain within the semiconductor industry that ensures integrated circuits (ICs) function correctly and meet industry standards before delivery.

    The progression from matured nodes (above 40nm) to advanced nodes (at and below 40nm) has brought about profound changes in the domain of test engineering.

    As an example:

    Coverage: Ensuring complete test coverage becomes challenging as node sizes shrink and transistor counts increase. Test engineers must develop strategies to cover more potential defect sites without increasing test time and, thus, the cost, which is challenging.

    Variations: Advanced nodes are more susceptible to process variations. This places greater emphasis on testing for parametric variations to ensure every chip meets performance and power specifications. It requires collecting quality data during fabrication and ensuring all defective parts get captured during fabrication or testing. All of this adds to the productization cost.

    Defect Mechanisms: As nodes have scaled down, new defect mechanisms related to manufacturing, like random defects and systematic variation, have emerged, requiring new test methodologies. It has increased the cost of capturing these as more sophisticated data and inspection software tools are needed.

    Power: Advanced nodes have power management techniques like multiple power domains, dynamic voltage scaling, and other state machine-driven power requirements. Testing such features requires specialized test patterns and methodologies. Doing so means investing in ATE configurations that can provide the needed setup.
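    The coverage challenge above ultimately reduces to a simple metric: the fraction of modeled faults (for example, stuck-at faults) that a test pattern set detects. A minimal sketch of that calculation, with invented net names and an invented detection set:

```python
# Hypothetical illustration of fault coverage as used when grading a
# test pattern set against a stuck-at fault list. The fault list and
# the undetected fault are invented for this sketch.

def fault_coverage(total_faults, detected_faults):
    """Percentage of modeled faults the pattern set detects."""
    return 100.0 * len(detected_faults) / len(total_faults)

# 10 nets, each with a stuck-at-0 and stuck-at-1 fault = 20 faults
fault_list = {f"net{i}/SA{v}" for i in range(10) for v in (0, 1)}
detected = fault_list - {"net9/SA1"}  # one fault the patterns miss
```

    As transistor counts grow, the fault list grows with them, so pushing this percentage up without inflating test time is precisely the trade-off test engineers face at advanced nodes.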

    The dense transistor count on newer nodes makes achieving comprehensive test coverage more difficult. Moreover, these advanced nodes are vulnerable to process variations, demanding rigorous testing to ensure consistent performance and power metrics.


    Picture By Chetan Arvind Patil

    The evolution of semiconductor nodes also means complexity in Design for Testability (DFT) has risen, with advanced nodes demanding refined techniques to cater to innovations like 3D stacked ICs and necessitating through-silicon via (TSV) tests. Concurrently, methods such as logic built-in self-test (LBIST) and memory built-in self-test (MBIST) have undergone significant transformations.

    DFT (Design for Testability) Complexity: As nodes have advanced, DFT techniques have become more sophisticated, demanding more test hooks to enable coverage.

    Cost Considerations: The cost of testing has been rising, especially for advanced nodes, due to the need to use high-cost ATE systems. It has led to a focus on reducing test times without compromising coverage and adopting more concurrent testing strategies.

    Reliability Testing: With the proliferation of devices in critical applications (e.g., medical, automotive), there is an increased emphasis on testing for reliability, longevity, and resistance to conditions like high temperature and radiation. As more transistors get packed in the smallest area possible, test escapes are always possible, which can lead to quality concerns.

    Data and Machine Learning: With the vast amount of data generated during testing, there is an increased emphasis on using machine learning algorithms to predict defects, optimize test sequences, and improve yield. This is especially relevant at matured nodes, where wafer sizes and the need for cost mitigation make it valuable to avoid redoing a similar process with a test program.

    As the semiconductor industry moved from matured to advanced nodes, test engineering evolved from basic functionality checks into a comprehensive discipline that ensures performance, power efficiency, reliability, and safety across intricate designs and multifaceted applications. This evolution has also raised cost and time-to-market pressures and, in many cases, made productization more complex than ever.

    This story is bound to continue with chiplets, ultra-advanced, and other More-Than-Moore methodologies.


  • How Artificial Intelligence Helps With Semiconductor Wafer Processing


    Image Generated Using Adobe Firefly


    Semiconductor wafer processing is an intricate series of steps to produce integrated circuits on silicon wafers. Artificial Intelligence (AI) has proven to be a powerful tool in refining and enhancing these processes.

    AI offers a range of applications in semiconductor wafer processing. Leveraging AI technologies can significantly improve efficiency, yield, and quality. Below are some of the key ways in which AI assists in semiconductor wafer processing:

    Process Monitoring:

    AI can monitor the various process parameters during wafer fabrication in real time. By analyzing this data, AI can predict potential issues and make real-time adjustments to maintain optimal conditions, ensuring consistent quality across wafers and batches.

    Predictive Maintenance:

    Tools and machines used in wafer processing need regular maintenance. Using AI and machine learning algorithms, predicting when a tool/machine will likely fail or require maintenance is possible. It reduces unplanned downtime and ensures maximum uptime, which is crucial for high-volume manufacturing environments.
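    A minimal sketch of the prediction idea, with invented numbers: fit a least-squares trend line to a tool's degrading health metric and extrapolate when it will cross a maintenance threshold. Real predictive maintenance uses far richer models, but the point of acting before failure is the same.

```python
# Hypothetical sketch: extrapolate a tool's drifting health metric
# (rising = degrading) to estimate the remaining cycles before it
# crosses a maintenance threshold. All numbers are invented.

def cycles_until_threshold(history, threshold):
    """Fit a least-squares line to the metric; return cycles left,
    or None when there is no upward degradation trend."""
    n = len(history)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(history) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, history)) / \
            sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None  # metric stable or improving
    intercept = my - slope * mx
    cross = (threshold - intercept) / slope  # cycle index at threshold
    return max(0.0, cross - (n - 1))
```

    Scheduling maintenance inside that predicted window converts unplanned downtime into planned downtime, which is the value proposition described above.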

    Defect Detection:

    Inspecting wafers for defects is a critical step. Using advanced image recognition and machine learning, AI can quickly scan and identify flaws that might be hard for a human to detect. Early defect detection can lead to process improvements and reduced wastage.
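    As a toy stand-in for the learned models used in production inspection, the sketch below flags pixels in a small grayscale grid whose intensity deviates strongly from the image mean. The image data is invented, and a real system would use trained image-recognition models rather than a z-score.

```python
# Toy sketch (invented data): flag outlier pixels in a grayscale
# inspection image by z-score against the image mean. A crude stand-in
# for the learned defect-detection models used in production.

def find_defects(image, z_thresh=3.0):
    """Return (row, col) of pixels deviating > z_thresh sigmas."""
    flat = [p for row in image for p in row]
    n = len(flat)
    mu = sum(flat) / n
    var = sum((p - mu) ** 2 for p in flat) / n
    sigma = var ** 0.5 or 1.0  # guard against a perfectly flat image
    return [(r, c)
            for r, row in enumerate(image)
            for c, p in enumerate(row)
            if abs(p - mu) / sigma > z_thresh]

# Uniform 5x5 test image with one bright anomaly at (2, 3)
wafer_image = [[10] * 5 for _ in range(5)]
wafer_image[2][3] = 200
```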

    Process Optimization:

    By collecting and analyzing vast amounts of data from various process steps, AI can help optimize process parameters. It not only increases yield but also improves the overall efficiency of the wafer production line.

    Besides optimizing processes and detecting issues with the silicon wafer, AI solutions are also being deployed to model and predict future scenarios.


    Picture By Chetan Arvind Patil

    By utilizing prediction techniques, the wafer process becomes more data-driven, enabling an optimized flow that can help increase the number of wafers processed per hour.

    Modeling:

    AI can help create accurate models of semiconductor processes, facilitating virtual experiments. It aids in understanding potential outcomes without actually running expensive real-world experiments.

    Yield Prediction:

    By analyzing historical data and real-time inputs, AI models can predict the yield of a particular batch of wafers. It helps make informed decisions regarding resource allocation, process adjustments, and inventory management.
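    One simple way to weight recent history most heavily, as such forecasts do, is an exponentially weighted moving average. A minimal sketch with invented lot yields; real systems would fold in live process inputs alongside the historical trend.

```python
# Minimal sketch: forecast the next lot's yield with an exponentially
# weighted moving average of historical lot yields. The yields and the
# smoothing factor are invented for illustration.

def ewma_forecast(yields, alpha=0.5):
    """alpha in (0, 1]: higher alpha weights recent lots more."""
    estimate = yields[0]
    for y in yields[1:]:
        estimate = alpha * y + (1 - alpha) * estimate
    return estimate
```

    A forecast dipping below target can then trigger the resource-allocation and process-adjustment decisions mentioned above before the low-yield lots are actually produced.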

    Optimal Utilization:

    AI can guide the optimal use of gases, chemicals, and other resources in the wafer processing pipeline, ensuring minimal waste and cost efficiency.

    Data Analysis:

    Semiconductor processing generates vast amounts of data. AI can analyze this data faster and more accurately than traditional methods, extracting valuable insights that can lead to process improvements.

    Integrating AI into semiconductor wafer processing has proved to be an efficient and cost-effective way to ensure high-quality products with a better yield.

    As AI technologies evolve and become even more sophisticated, their role in the semiconductor industry will expand, resulting in further innovations.


  • Why The Semiconductor Industry Is Speedily Adopting Advanced Packaging


    Image Generated Using Adobe Firefly


    Semiconductor packaging is one of the last steps in semiconductor manufacturing and a crucial one. Protecting the silicon die from physical, mechanical, and thermal damage is only possible with a packaging solution. Silicon package technologies are also vital in developing interfaces that enable new architectures, like chiplet-based XPUs.

    Historically, ever since the first silicon chips, there have been efforts to accommodate new types of package technologies by adapting to the requirements of the end application, in line with Moore’s law.

    Lately, one such adoption is embracing the advanced packaging technique. It is a needed solution that enables the path towards the More-Than-Moore era.

    So, what does advanced packaging mean? In simple terms, advanced semiconductor packaging refers to assembly techniques developed to provide technical solutions that enable interconnections of multiple dies.


    Picture By Chetan Arvind Patil

    And why does the semiconductor industry need to adopt advanced packaging speedily? There are several reasons. Below are the few top ones:

    Performance:

    Performance: How much performance a single die can deliver correlates directly with the physics of the semiconductor, traditionally with how fast transistors can shrink so their count doubles in the same area. Considering the ever-increasing demand for performance from applications, it is slowly becoming impossible to cater to the advanced requirements (mainly for XPUs stretched to their limits by AI applications). This is where advanced packaging comes into the picture, allowing the integration of homogeneous and heterogeneous silicon dies to create a more robust system of silicon chips, all of which leads to better performance.

    Functionality:

    Functionality: Chiplets are among the most talked-about topics in the computing and silicon industries. The most vital aspect of chiplets is not the disaggregation of the silicon die into multiple chiplets, or their fabrication and testing, but the packaging. Chiplets are valuable only if they can be integrated without affecting their functionality, and this is where advanced packaging applies. The System-in-Package (SiP) solution is one such example of advanced packaging: it allows better integration of multiple dies and suits SoCs utilizing ultra-advanced nodes.

    Node:

    Node: Advanced packaging provides the ability to keep utilizing matured process nodes. This is done mainly through chiplets, which allow the latest and matured nodes to be integrated side by side, with advanced packaging enabling better node utilization, meaning there is no need to rely on one specific technology node.

    Innovation:

    One of the most critical impacts of advanced packaging is to enable new silicon design. Whether by utilizing 2D, 2.5D, or 3D integration, advanced packaging provides an avenue to innovate that is not limited to die-level innovation. With advanced packaging, it has become possible to develop better integration techniques to overcome the limit of space-constrained die.

    While there are benefits to embracing advanced packaging solutions, it also comes with its challenges.

    The major challenge is for the semiconductor assembly vendors, who must invest the capital to develop facilities that can enable such advanced integrations and consistently develop process features to ensure the end cost is ROI-friendly.

    With Moore’s Law delivering diminishing returns in cost and performance, advanced packaging is here to play a crucial role in continuing the evolution of semiconductor capabilities and meeting the demands of new-age computing systems.

    Advanced packaging addresses both technological challenges and market demands by improving performance, functionality, and node utilization while driving innovation. Thus, it has become a solution that semiconductor assembly vendors are rushing to develop, and foundries are getting into it as well.

    The next few years will likely see the most activity in this domain and can provide a new trajectory for the semiconductor assembly industry.


  • The Hurdles In Adopting Chiplets As Semiconductor More-Than-Moore Solution


    Photo by Laura Ockel on Unsplash


    Chiplets are undoubtedly the most suitable semiconductor design and manufacturing solution for scaling complex chips that are reaching the reticle limit. Several silicon products, mainly processors, have already adopted chiplets. These products have showcased that using multiple chiplets can significantly improve performance, all without worrying about reticle or device constraints.

    Several initiatives have emerged to speed up chiplet adoption. One example is the Universal Chiplet Interconnect Express (UCIe), an open specification for bridging different types of chiplets from the same or multiple vendors by standardizing interconnectivity, quality, reliability, and several other aspects critical to a chip built using the disaggregated approach. Similarly, a few more industry and academia consortiums have emerged, focusing on several criteria to ramp up mass chiplet adoption.

    Even with all these steps, certain fundamental hurdles remain, and careful planning is needed before any company uses chiplets to develop silicon chips.

    Wafer management is one such hurdle. As the number of chiplets per silicon design increases, so will the number of wafers, adding cost to the already costly process.

    Wafer Management:

    – A chip designed using chiplets methodology will consist of multiple wafers, with each wafer focused on producing a specific silicon chiplet

    – As an example, an XPU with N chiplets will require N wafers and, thus, as many wafer management flows

    – Merging these N wafers into a single package via heterogeneous integration is going to be complex, time-sensitive, and prone to errors

    – Apart from this, the fabrication, testing, and assembly of each of these wafers will add extra cost that may exceed that of the aggregated approach

    With each wafer comes the challenge of achieving the reference yield.

    Yield:

    – Chiplets with less complex designs (due to the splitting of larger die areas into multiple chiplets) will still have to go through the required yield check process

    – The more chiplets there are, the more time-intensive the yield check becomes (multiple data sources)

    – Managing yield specifications across chiplets is an additional hurdle

    – So far, the best possible way to mitigate this would be to consolidate the larger block into a single chiplet and repeat the same with other blocks of a larger silicon chip

    – Even then, the questions of managing and achieving yield remain, and multiple chiplets only add to the already complicated flow
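    The yield trade-off described above can be sketched with the classic Poisson defect-density model, Y = exp(-A·D0). The defect density, die area, and chiplet count below are illustrative assumptions (not figures from this article), chosen only to show why splitting a near-reticle-limit die into smaller chiplets improves per-die yield:

    ```python
    import math

    def poisson_yield(area_mm2: float, d0_per_mm2: float) -> float:
        """Poisson defect-density yield model: Y = exp(-A * D0)."""
        return math.exp(-area_mm2 * d0_per_mm2)

    # Illustrative assumptions, not data from the article:
    D0 = 0.001          # defects per mm^2
    DIE_AREA = 800.0    # mm^2, a large XPU near the 26 mm x 33 mm reticle limit
    N_CHIPLETS = 4      # hypothetical disaggregation of the same logic

    monolithic_yield = poisson_yield(DIE_AREA, D0)

    # Each chiplet carries 1/N of the area, so each yields far better
    # individually; known-good-die testing then screens out the failures
    # before packaging.
    chiplet_yield = poisson_yield(DIE_AREA / N_CHIPLETS, D0)

    print(f"Monolithic die yield: {monolithic_yield:.1%}")   # roughly 45%
    print(f"Per-chiplet yield:    {chiplet_yield:.1%}")      # roughly 82%
    ```

    The per-chiplet improvement is real, but as the bullets above note, it comes at the price of N separate yield-check flows and data sources that must all be reconciled.
    
    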


    Picture By Chetan Arvind Patil

    Efficient testing of chiplets is another hurdle.

    Test:

    – Every chiplet will come from its own wafer, and when multiple such wafers are consolidated to create the end silicon chip, testing is crucial and, by default, part of the process flow

    – With multiple flows, the amount of effort needed to test increases

    – It leads to more resources and testing hardware

    – Eventually, adding to the testing cost

    – If this is more than the cost of testing an aggregated chip, the industry must develop a process to optimize it

    Turning multiple chiplets into a working silicon chip is also about managing overall development cost. Today, the cost of the chiplet approach will be higher than that of the aggregated approach due to the increased resources required by multiple chiplets.

    Cost:

    – Fabrication, testing, and assembly costs increase with test complexity

    – When working with chiplets, managing the expense of working with multiple wafers (each wafer = one chiplet) is crucial. Otherwise, chiplets will not be seen as a valid alternative to the aggregated flow

    – Chiplets should be seen not only as a More-Than-Moore solution but also as a cost-optimization solution

    – Eventually, the goal is also to manage the increased cost of designing chips as large as a reticle (26 mm by 33 mm)

    – Lastly, the human resources required for chiplets will exceed those needed for an aggregated approach, which will add to the cost too
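    The cost bullets above can be made concrete with a toy cost-per-good-part model. Every number here (wafer cost, dies per wafer, assembly overhead, assembly yield) is an illustrative assumption: the point is that the yield advantage of smaller chiplet dies can be eaten up by the assembly and test overheads of handling N wafers, which is exactly the cost-management problem described above.

    ```python
    import math

    def poisson_yield(area_mm2: float, d0_per_mm2: float) -> float:
        """Poisson defect-density yield model: Y = exp(-A * D0)."""
        return math.exp(-area_mm2 * d0_per_mm2)

    # Illustrative assumptions, not data from the article:
    D0 = 0.001             # defects per mm^2
    DIE_AREA = 800.0       # mm^2, near-reticle-limit XPU
    N = 4                  # chiplets in the disaggregated design
    WAFER_COST = 15000.0   # $ per wafer, same node for both flows
    DIES_PER_WAFER = 70    # monolithic die candidates per 300 mm wafer
    ASSEMBLY_COST = 200.0  # $ per part: known-good-die test, interposer, bonding
    ASSEMBLY_YIELD = 0.98 ** N  # each bonded chiplet adds integration risk

    # Monolithic flow: one wafer type, cost spread over good dies only.
    mono_good = DIES_PER_WAFER * poisson_yield(DIE_AREA, D0)
    mono_cost = WAFER_COST / mono_good

    # Chiplet flow: each chiplet is 1/N the area, so N * DIES_PER_WAFER
    # candidates across N wafers; a finished part consumes N known-good
    # chiplets plus assembly cost, discounted by package-level yield.
    chiplet_good = N * DIES_PER_WAFER * poisson_yield(DIE_AREA / N, D0)
    chiplet_cost = (N * WAFER_COST / chiplet_good + ASSEMBLY_COST) / ASSEMBLY_YIELD

    print(f"Monolithic cost per good part: ${mono_cost:.0f}")
    print(f"Chiplet cost per good part:    ${chiplet_cost:.0f}")
    ```

    With these assumed numbers, the chiplet part comes out slightly more expensive despite the much better silicon yield, because the assembly overhead and package-level yield loss dominate; shrinking that overhead is what would let the disaggregated flow beat the aggregated one.
    
    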

    No doubt, chiplets have the potential to address several technical and business challenges of the aggregated approach. For now, however, chiplets will be limited to certain types of silicon chips, like XPUs.

    For chiplets to be disruptive, the manufacturing-flow challenges mentioned above must be resolved. Otherwise, the journey towards and beyond sub-1nm will be more aggregated than disaggregated.


  • Chipping In The Talent: Impact Of CHIPS And Science Act On Building The Future Semiconductor Workforce


    Photo by Edwin Andrade on Unsplash


    Congress passed the CHIPS And Science Act a year ago. Since then, many positive activities have rightly shaped the future of the U.S. semiconductor industry.

    This holds for all the critical pillars of the semiconductor industry: increased semiconductor-focused research, alignment of education with semiconductor industry needs, building and expansion of semiconductor FABs, and re-skilling of the workforce to bring people on board the semiconductor industry.

    Source: SIA

    Out of all these critical pillars, the one that will empower the future of the U.S. semiconductor industry the most is semiconductor education and the workforce.

    As per the joint report of the Semiconductor Industry Association (SIA) and Oxford Economics, close to 67,000 jobs across semiconductor manufacturing and design will go unfilled by 2030 in the absence of corrective measures.

    Source: McKinsey & Company

    Fortunately, Congress passed the CHIPS And Science Act at the right moment, with greater emphasis on semiconductor education and workforce training and close to $200 billion allocated to creating workforce-related programs. As an example:

    New institutes, like the National Semiconductor Technology Center, will focus on workforce and training apart from providing the right platform to move semiconductor innovation forward.

    Development of the National Advanced Packaging Manufacturing Program will enable cross-collaboration between industry and academia to deepen research activities along with training the workforce to align with future requirements.

    The CHIPS for America Workforce and Education Fund will help develop a domestic semiconductor workforce and mitigate the near-term labor shortage by utilizing programs under the National Science Foundation.


    Picture By Chetan Arvind Patil

    Due to such support from the CHIPS And Science Act, several universities and state bodies across the U.S. have kickstarted semiconductor programs to attract students into semiconductors as an education and career option.

    As an example:

    Arizona State University is working with industry to build the future workforce and has also developed a certification program for students and professionals.

    The CHIPS And Science Act has also helped Purdue University offer a range of semiconductor and microelectronics-focused courses and exclusive degree programs, including collaboration with upcoming international regions like India, which accounts for more than 20% of the global semiconductor workforce.

    The state of Michigan is building a semiconductor workforce by investing in programs that bring Pre-K-12 and postsecondary students into science, technology, engineering, and math (STEM) careers.

    The University of Arkansas is constructing the first-ever national Multi-User Silicon Carbide Research and Fabrication Facility, which will train the workforce to focus on SiC technologies that will power EVs and other applications requiring energy-efficient silicon chips.

    Northwestern University has also started a Master’s program with a semiconductor specialization by creating coursework around VLSI, Computer Architecture, Algorithms, and various other semiconductors-focused courses.

    Purdue University has also created the first-ever Virtual FAB LAB that will speed up the training and onboarding of entry-level semiconductor professionals.

    Ohio State University is leading the new interdisciplinary Center for Advanced Semiconductor Fabrication Research and Education in collaboration with ten in-state colleges and universities.

    Arizona community colleges have created a curriculum to focus on training future technicians.

    SEMI has also created virtual courses to drive continuous education around semiconductors.

    Arizona State University has partnered with Applied Materials to create a Materials-To-Fab (MTF) facility.

    NXP Semiconductors, ASU, and Advantest have collaborated to train the semiconductor testing workforce, which plays a crucial role in bringing silicon chips to life.

    The Commonwealth of Virginia (VDEP) and Virginia Tech are adapting to meet the demand for highly skilled workers in the semiconductor industry. [Updated on 02/14/2024 – Credit Andrea Hill]

    Image Source: ASU

    Even on the research front, NSF has already awarded numerous contracts and funds to spearhead semiconductor research, helping universities, faculty, and graduate students focus on much-needed research solutions, from process nodes to the physics behind semiconductor EUV equipment. All of this will feed into the US’s existing and upcoming semiconductor manufacturing facilities.

    The semiconductor industry workforce requires a mix of technicians, bachelor’s and master’s degree holders, along with Ph.D. holders to drive research activities.

    Image Source: National Science Foundation

    To continually train and fill the gap, universities are already developing semiconductor-focused educational programs and systems to build the American semiconductor workforce, all due to the focus created by the CHIPS And Science Act.

    Slowly but steadily, these measures will bridge the gap between the demand and supply of the semiconductor workforce, and by 2030, industry requirements and academic training programs should be far better aligned.


  • The CHIPS And Science Act Is Reestablishing The Needed Pillars For American Semiconductor Ecosystem


    Photo by Mayer Tawfik on Unsplash


    It has been one year since the US passed the CHIPS And Science Act. Since then, the American semiconductor ecosystem has seen a lot of activity and ground progress.

    It is evident from the expansion of Intel’s network of FABs in the US and TSMC’s almost-complete FAB21 in Phoenix, Arizona, targeting the ultra-advanced 5nm node (and subsequently 3nm). The CHIPS And Science Act has also re-energized the semiconductor ecosystem in Texas, where Samsung is creating a new semiconductor cluster in the City of Taylor. Similarly, Micron Technology, GlobalFoundries, Texas Instruments, SkyWater, and many other semiconductor Pure-Play vendors and IDMs have planned new semiconductor manufacturing plants.

    It is not only the semiconductor manufacturing part of the supply chain that the CHIPS And Science Act has touched. The act has rightly focused on several other critical parts of the semiconductor ecosystem, including new semiconductor-focused university programs, research funds, and workforce reskilling.

    For a semiconductor ecosystem to thrive in any country, the focus must be on these critical pillars (Academia, Research, Workforce, Infrastructure), and the CHIPS And Science Act has started to impact all of these.

    Image Source: Purdue University

    Academia:

    Progress in semiconductor technologies is highly dependent on the workforce, and this act has provisions to invest in training and research activities around semiconductor education.

    For example, it has helped Arizona State University work with industry to build the future workforce and develop a certification program for students and professionals.

    The act has helped Purdue University offer a range of semiconductor and microelectronics-focused courses and exclusive degree programs, including collaboration with upcoming international regions like India.

    Similarly, several other universities have started structuring programs focusing mainly on semiconductors.

    Image Source: National Science Foundation

    Research: 

    The impact of the CHIPS and Science Act has started showing up in the form of an increase in research activities by the National Science Foundation (NSF).

    NSF has already awarded numerous contracts and funds to spearhead semiconductor research, now helping universities, faculty, and graduate students focus on much-needed research solutions, from process nodes to the physics behind semiconductor EUV equipment. All of this will feed into the US’s existing and upcoming semiconductor manufacturing facilities.


    Picture By Chetan Arvind Patil

    Workforce:

    With the recent semiconductor workforce shortage, the CHIPS And Science Act has rightly set its focus on this front, and at just the right time.

    For example, Purdue University has started the Summer Training, Awareness, and Readiness for Semiconductors (STARS) program—an eight-week course to develop skills in fabrication, packaging, and semiconductor device and materials characterization.

    Similarly, the first Virtual Cleanroom Environment for Semiconductor Manufacturing Technology was released to help professionals retrain (or train a new workforce) to align with the advancing fabrication process. It is also slowly becoming a valuable tool for training the new fab workforce.

    Image Source: MAPT

    Infrastructures:

    Apart from the existing push from private semiconductor FAB players, the CHIPS And Science Act also focuses on futuristic and advanced packaging technologies, which are at the core of semiconductor product development.

    In a similar line, the National Institute of Standards and Technology (NIST) has set a clear focus on establishing the National Semiconductor Technology Center (NSTC) and the National Advanced Packaging Manufacturing Program (NAPMP), which will enable America’s ability to develop the chips and technologies of the future and safeguard America’s global innovation leadership.

    The Microelectronics and Advanced Packaging Technologies Roadmap (MAPT) shows how NIST (in collaboration with SRC) will focus on advancing back-end semiconductor manufacturing innovation.

    Image Source: TSMC LinkedIn

    Within one year, the CHIPS And Science Act has helped the semiconductor ecosystem in the US to focus on the much-needed pillars of semiconductor-focused academic programs, research, workforce, and infrastructures.

    More results will be seen in a few years, when new FAB, Assembly, and Testing facilities are operational and have access to a trained workforce.

    So far, the vision of re-establishing the end-to-end semiconductor supply chain in the US is on track, eventually bringing in new waves of semiconductor-driven innovation and growth.


  • The Semiconductor Technology Node India Should Focus On

    Photo by Maxence Pira on Unsplash


    India is betting on end-to-end semiconductor manufacturing, which is the right step to meet the country’s growing demand for semiconductors. If executed as planned, imports of silicon chips will drastically decrease, reducing dependency on other countries.

    The technology node is crucial to getting the semiconductor manufacturing strategy correct. It is at the core of any silicon chip that ever gets produced. For a country like India, where the majority of semiconductor manufacturing (mainly the FAB/fabrication part) is confined to public units and caters to the country’s space, defense, and other critical national infrastructure needs, it is more beneficial to leverage these foundations and extend the technology node capabilities to develop more India-centric process technologies.

    The Technology Node (Also Process Node, Process Technology, Or Simply Node) Refers To A Specific Semiconductor Manufacturing Process And Its Design Rules.

    Thus, considering the capabilities of India’s public semiconductor FABs, it is more suitable to aim for 140nm and above. The reasons range from cost and yield to process control and market demand. 140nm is also a natural fit given the 180nm capabilities of the Semi-Conductor Laboratory (SCL), which can be extended not only to fabricate 140nm CMOS devices but also to ensure that these facilities get upgraded – a win-win situation for all.

    If SCL cannot develop 140nm in-house, there is also the potential to tie up with established Pure-Play vendors, similar to the Maruti-Suzuki JV. Regarding market potential, 140nm CMOS, including BiCMOS semiconductor technology, is still in high demand, mainly in the industrial, automotive, mobile, and other computing sectors.


    Picture By Chetan Arvind Patil

    The semiconductor equipment and setup cost for a 140nm-and-above CMOS/BiCMOS semiconductor FAB is relatively lower than for advanced nodes. Yield is better, and the application area is far greater. Focusing on anything lower, mainly advanced nodes like 7nm, will be a step in the wrong direction unless a private player is willing to set up a FAB by utilizing the incentives and spending endless resources.

    Benefits of 140nm and above CMOS/BiCMOS:

    Proven: 140nm and above are established and widely used technology nodes that can speed up India’s entry into semiconductor fabrication and manufacturing.

    Cost: 140nm and above is more cost-friendly, and the equipment is easier to procure, including from the used equipment market.

    Yield: The established knowledge around 140nm-and-above CMOS can ensure fast bring-up of a new FAB.

    ROI: The break-even and ROI for 140nm-and-above technology nodes will come much faster than for any other process node, as they fit the bulk of market requirements.

    Having an advanced node FAB in India can be a game-changer, but only if a private player undertakes it. If the focus is to get Indian companies (like Vedanta) to set up FABs in India, creating a tech JV with SCL and other players to indigenously develop the process flow for 140nm and above could be a more significant breakthrough.