Category: DATA

  • The Semiconductor Chips For Data Centers

    Photo by Taylor Vick on Unsplash


    THE BUILDING BLOCKS OF SEMICONDUCTOR CHIPS FOR DATA CENTERS

    The connected world is leading to real-time information exchange, which is why everyone from consumers to enterprises expects requests to be processed as fast as possible. The devices used to send such requests can only process and store a certain amount of data. Anything beyond that threshold requires the use of data centers, and that also means transferring/receiving data over the air.

    The computing industry has relied on data centers since the mainframe days. However, the importance of data centers has grown mainly because of connected systems. These data centers have to run 24×7 and cater to numerous requests simultaneously.

    To ensure a quick and real-time response from data centers, three major systems have to work in synchronization:

    Software: If a smartphone user sends a request, the data needs to be encrypted and packetized before being sent over the air to the remote location where the massive data centers sit. This means the software solutions, on both the client and the server side, have to work in harmony. This is why software is the first major system required for reliable data center operation.

    Connection: The second major system is the network of wired and wireless systems that aids the transmission of data from the client to the server (data center). If a robust connection is not available, then data centers will be of no use.

    Hardware: The third and most critical piece is the silicon chip, or hardware, that makes up the data center. These tiny semiconductor chips end up catering to all the requests that come from different parts of the world. To fulfill requests in real time, smart silicon chips are required that can handle the data efficiently without adding bottlenecks.

    The growing internet user base, along with data-driven computing solutions, has led to high demand for data centers. To cater to all these growing services, different types of data centers are required. Some data centers are small (fewer servers) and some are giant. Data centers with more than 5000 servers are also called hyperscale data centers. In 2020, there were more than 500 hyperscale data centers running 24×7, catering to requests coming from every part of the world.

    Data Centers Require Different Types Of Semiconductor Chips.

    Running these hyperscale data centers requires large facilities, but the key piece is still the tiny semiconductor chips that have to run all the time to handle different types of requests. Due to the growing focus on data centers, there is a need to change the way new semiconductor chips are designed for data center usage.

    This is why all the semiconductor chip solutions that end up being used in data centers should be built around the following blocks:

    Processing: Semiconductor chips for data centers should be designed to process not only large amounts of data but also new types of data. This requires designing semiconductor chips that can serve each request in the shortest time possible, while also ensuring there are no errors during processing.

    Security: Data centers receive different types of data processing requests, and this data can contain anything from credit card transactions to personal information to login credentials. Security therefore has to be a default focus when designing silicon solutions for data center usage.

    Workloads: Rapid software development has led to different types of data. That data eventually forms the workloads that the computing system has to process. Given the rise of AI/ML/DL, there is a need to process the data elegantly. This requires doing away with traditional processing blocks and instead adopting more workload-centric architectures that enable a high level of parallelism to train on the data and infer information from it.

    Adaptive: A smart world not only requires data capture but also demands adaptive decisions. This often requires on-the-go training and modeling to ensure the user request is fulfilled intuitively. This is why there is demand for AI-driven architectures (eFPGA or NPU) that can train on data efficiently and ensure any new (never-before-seen) request is handled without errors.

    Storage: Memory is one of the major building blocks of the computing world. The surge in the use of cloud storage is leading to new, innovative storage systems that can provide more storage per dollar. This requires driving new semiconductor solutions so that data centers become more powerful while staying compact enough not to consume large amounts of energy.

    Efficiency: Data centers are considered to be among the most power-hungry systems in the world. The year-on-year growth in hyperscale data centers is only going to increase power consumption. To balance the processing need with the power consumption, semiconductor solutions have to consider the energy consumed per user request. By building efficient semiconductor chips, data centers can grow in number without a proportional impact on total power consumption.
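
    One simple way to reason about this block is to track the energy spent per request. Below is a minimal, hypothetical Python sketch of that metric; the chip power and request counts are made-up numbers, not measurements from any real data center.

      # Hypothetical illustration of the "energy per request" metric discussed above.
      def energy_per_request_joules(avg_power_watts, window_seconds, requests_served):
          """Energy per request = (average power x time window) / requests served."""
          return (avg_power_watts * window_seconds) / requests_served

      # Made-up example: a 300 W server chip serving 1.2 million requests in one hour.
      print(energy_per_request_joules(300, 3600, 1_200_000))  # -> 0.9 joules per request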

    The above building blocks are not specific to XPUs for data centers only. They are equally valid for the other types of semiconductor chips that data centers require, whether a networking solution (PCIe) or a data transfer interface (HBM). Ultimately, the points discussed above are the major reasons why data centers require different types of semiconductor chips.


    Picture By Chetan Arvind Patil

    THE BOTTLENECKS TO DRIVE NEXT-GEN SEMICONDUCTOR CHIPS FOR DATA CENTERS

    Designing and manufacturing semiconductor chips for any type of solution requires a thorough understanding of the various issues and opportunities. The same strategy applies when coming up with semiconductor solutions for data centers. For data centers, the complexity is much higher than for consumer systems, mainly due to the need to provide bottleneck-free semiconductor products that run all the time.

    Over the past few years, new data center-focused semiconductor companies have emerged, providing different solutions. Fungible’s Data Processing Unit and Ampere’s ARM-powered XPU are a couple of examples. In the end, however, the goal of all these semiconductor solutions is to focus on a set of features that ensures every request is served by the data center in real time without adding bottlenecks.

    When it comes to bottlenecks in the world of computing, the list is endless. These bottlenecks can originate either from software or hardware. Eventually, the software features have to be mapped onto the hardware so that both software and hardware can work in synchronization to drive next-gen solutions.

    The next-gen semiconductor chips need to focus on a few criteria to drive bottleneck-free, semiconductor-powered data centers:

    Features: Traditional semiconductor chips for data centers were (and still are) purely focused on performance. As the world is increasingly adopting connected systems, there is growing demand to balance performance with efficiency. This requires a new set of features that can ensure that tomorrow’s data centers are more efficient than today’s. These features can range from using advanced transistor-level techniques to new packaging solutions.

    Data: The amount of data that hyperscale data centers have to crunch will keep increasing every year. The storage aspect of it will also grow along with it. This growth is leading to huge cooling systems and thus adds to total energy requirements. This challenge of managing data while lowering the impact on power consumption is pushing new solutions. More modular approaches are needed to drive next-gen semiconductor solutions.

    Parallelism: Any given chip of any type in a data center can receive any number of requests. To ensure there are no bottlenecks, intelligent parallelism techniques are required. Some of these techniques require software support, but many also require hardware features (caches, data pipelines, etc.) that can support parallelism, as the software-side sketch below illustrates. Networking and XPU solutions often have to consider this problem while designing chips for data centers.
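
    The hardware details of such parallelism are beyond a short example, but the basic idea of serving many requests concurrently instead of strictly one after another can be sketched on the software side. The snippet below is only an illustration; the handle_request function and the request payloads are hypothetical.

      from concurrent.futures import ThreadPoolExecutor

      # Hypothetical request handler; in a real data center the work would be
      # spread across cores, caches, and accelerators inside the chip.
      def handle_request(payload):
          return payload.upper()

      requests = [f"request-{i}" for i in range(8)]

      # Serve several requests in parallel rather than one after another.
      with ThreadPoolExecutor(max_workers=4) as pool:
          responses = list(pool.map(handle_request, requests))

      print(responses)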

    Speed: While there is growing concern about power consumption (by data centers) due to performance requirements, there is also demand for faster responses from data centers. This requires designing semiconductor chips for faster processing. Balancing the power, performance, and area aspects for data centers is becoming more difficult than ever. This is leading to more modular data centers, but they will still demand semiconductor chips that can provide high speed without adding to the power requirement.

    Network: Data centers have to communicate with different systems located in remote areas, and such communication requires heavy use of networking solutions. To drive communication efficiently, robust networking chips are required that can handle the data without any errors. This demands designing and manufacturing semiconductor solutions with reliability and error correction in mind. In the long run, network chips are going to play a vital role and require bottleneck-free designs to drive new data centers.

    Architecture: Intel is considered the leader in XPU solutions for data centers. To design XPUs, Intel has been relying on its homegrown x86 architecture. Over the last decade, however, the emerging workloads have changed a lot, and that requires new XPU solutions. To provide newer solutions, emerging companies are focusing more on ARM and RISC-V to power their designs. The major driving factor for using ARM or RISC-V is the ability to adapt and change the architecture to suit future requirements. Picking the right architecture is vital to avoid any kind of bottleneck in the XPUs for next-gen data centers.

    In the last two years, the world has moved towards data center solutions mainly due to the remote access that different services now require. The growth in the number of smartphone and smart device users is also driving the need for new and efficient hyperscale data centers. To cater to the future demand for green hyperscale data centers, existing and emerging semiconductor companies will keep coming up with newer solutions.

    In the long run, newer data-centric semiconductor solutions are only going to benefit the computing industry, and the race to win data centers has just begun.


  • The Costly Semiconductor Data

    Photo by Jorge Salvador on Unsplash


    THE COSTLY LIFE CYCLE OF SEMICONDUCTOR DATA

    The importance of data in different industries has only grown over the last few decades. This has in turn given rise to different new techniques and tools that enable enterprises to capture and analyze data on the go. It will not be wrong to say that today, the most prized commodity in the market is data.

    The same story applies in the semiconductor industry. The shrinking transistor size and the need to enable efficient devices have increased the importance of capturing data from every possible stage of the semiconductor product development process. However, the major hurdle in doing so is the cost associated with the process.

    Semiconductor Data Is The Next-Gen Oil.

    Shrinking transistors enable better devices and empower the long-term product development roadmap. However, when it comes to cost, things start to get complicated. Cost is also the major reason why there are only three players (Intel, Samsung, and TSMC) battling the 5/3 nm race. The cost required to set up a FAB and its support system (equipment, facilities, tools, resources, and many other things) is too high and often requires long-term investment planning. Even building a MINI-FAB today requires a billion dollars or more to set up, and it will then take years to break even.

    Setting up a smaller research and development facility is an efficient way to capture semiconductor data, but it is not feasible to rely on smaller labs/setups for too long. In order to meet worldwide demand, the facilities eventually have to expand.

    – MINI-FAB > $1 Billion

    – MEGA-FAB > $4 Billion

    – GIGA-FAB > $12 Billion

    – MINI/MEGA/GIGA = Defined Based On Wafer Capacity.

    This makes the process of capturing and handling semiconductor data crucial. Any data point that comes out of the pre-silicon or post-silicon stage has to go through a specific life cycle before being stored for long-term usage. This life cycle of data handling adds cost on top of the FAB investment. Semiconductor companies understand the importance of setting up the data life cycle flow and have always invested relentlessly, both in the processes that lead to silicon and in the processes required to generate data out of different silicon products.

    Below is an overview of how semiconductor data is handled and why each of these processes is vital. In a nutshell, these steps are no different from how any big data gets handled. When it comes to the semiconductor industry, the major difference is the effort (cost and resources) it takes to generate data from different types of semiconductor products, which often requires large setups.

    Generation: Generating semiconductor data requires a silicon wafer (with dies that are testable) and a test program that can run across the full wafer. Both demand different sets of tools and resources. A dedicated FAB is tasked with creating a silicon wafer that has the product printed (repeatedly) across its full area, which in itself is a costly process. On the other hand, a dedicated tester environment (OSAT) with different hardware and equipment is required to run the test program. Such a long and delicate process requires not just the product but also manufacturing, logistics, handling, and equipment resources. The sum of all these investments eventually allows semiconductor data to be generated, and even without going into the details, it is easy to see how costly and time-consuming this process is.

    Cleaning: Generating data out of the silicon is the first step. As explained above, it requires different sets of hardware and equipment to drive semiconductor data generation. In the majority of cases, the data is generated in a standard format but still requires a lot of post-processing and elimination techniques to make sense of it. This cleaning process sits more on the software side and demands data processing tools that help engineers understand the different features of the silicon data. The associated cost comes from setting up the flow that allows semiconductor companies to capture the data at the source, from where it can be sent to servers for engineers to retrieve. From there on, the cleaning steps start.
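
    As a rough illustration of what such a cleaning flow can look like once the tester output has landed in tabular form, here is a minimal Python/pandas sketch. The file name and the column names (lot_id, wafer_id, die_x, die_y, vdd_ma) are hypothetical, not a real tester format.

      import pandas as pd

      # Hypothetical post-silicon test log; file and column names are made up.
      raw = pd.read_csv("wafer_test_log.csv")

      # Typical cleaning steps: drop rows with missing measurements, remove
      # duplicate retests of the same die, and keep only physically plausible values.
      clean = (
          raw.dropna(subset=["vdd_ma"])
             .drop_duplicates(subset=["lot_id", "wafer_id", "die_x", "die_y"], keep="last")
             .query("0 < vdd_ma < 500")
      )

      clean.to_parquet("wafer_test_log_clean.parquet")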

    Profiling: Simply collecting random silicon data is not useful. The semiconductor product itself is going to be used in different systems and environments. This environment will push the product through different operating conditions. To ensure the product works under different conditions (temperature, process variation, current, and voltage settings), the development phase pushes the product/silicon through several testing criteria. These are often based on the industry-accepted standards (AEC, JEDEC, IPC, etc.). Carrying out all these tests to gather the promising semiconductor data (that will qualify the semiconductor product for the market) is challenging. The cost associated with it is also on the rise and thus adds another costly layer towards capturing the semiconductor data.

    Mapping: The semiconductor data is often captured in big chunks. This can be at the wafer level or the lot level. In both cases, it becomes really important to ensure that the data can be traced back to the die/part it originated from. The process to do so starts long before the test data is available, via techniques ranging from physical markings to memory-based traceability. This again points to the fact that data mapping also requires base resources to achieve the target of not only ensuring the semiconductor data is available, but also that it can easily be mapped back to its source.
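
    A minimal sketch of what such traceability can look like once the data is tabular is shown below; the identifiers (lot_id, wafer_id, die_x, die_y) and the records are hypothetical.

      import pandas as pd

      # Hypothetical test results and FAB inspection records, both keyed by
      # lot / wafer / die coordinates so every measurement maps back to its source die.
      test = pd.DataFrame({
          "lot_id": ["L1", "L1"], "wafer_id": [7, 7],
          "die_x": [3, 4], "die_y": [5, 5], "bin": [1, 8],
      })
      inspection = pd.DataFrame({
          "lot_id": ["L1"], "wafer_id": [7],
          "die_x": [4], "die_y": [5], "defect": ["particle"],
      })

      # Join on the traceability keys: the failing bin 8 die maps back to a known defect.
      merged = test.merge(inspection, on=["lot_id", "wafer_id", "die_x", "die_y"], how="left")
      print(merged)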

    Analysis: Once all the possible data is captured and available to engineers, the main task starts. Clean data (no skews or deviations) is the dream of every engineer, yet even with the cleanest data it becomes crucial to question different aspects of it. And if there is a failing part, then finding the root cause is a must. This process requires sophisticated data exploration tools that bring efficiency. These tools should also be able to connect back to any historical/relevant data that can explain a deviation or misalignment in the new data. If the data cannot answer all the questions, then comes the interesting part of putting together a root cause analysis plan. All this is not only a time-consuming process but also demands costly resources.
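
    One common first pass during such analysis is to flag measurements that deviate too far from the rest of the population. The sketch below assumes the hypothetical cleaned data set from the earlier cleaning sketch and uses an illustrative 3-sigma rule; it is not a prescribed method.

      import pandas as pd

      # Hypothetical cleaned test data (see the cleaning sketch above).
      clean = pd.read_parquet("wafer_test_log_clean.parquet")

      # Flag parts whose supply current deviates more than 3 sigma from the population,
      # a common first pass before a deeper root cause investigation.
      mean, std = clean["vdd_ma"].mean(), clean["vdd_ma"].std()
      clean["outlier"] = (clean["vdd_ma"] - mean).abs() > 3 * std

      print(clean.loc[clean["outlier"], ["lot_id", "wafer_id", "die_x", "die_y", "vdd_ma"]])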

    Visualization: Analysis and visualization go hand in hand. However, not all tools are great at both the analysis and the visualization part. This pushes semiconductor data engineers towards exploring the data using different tools. In the majority of cases, these tools are procured from the software data industry. But it also happens that companies are willing to invest internally to come up with an easy visualization technique that can provide information as soon as the data is ready. This does require a dedicated team and tools, which require capital.

    Monitoring: Monitoring is another aspect of semiconductor data. It can be about the process steps involved during semiconductor fabrication or about all the equipment being used for semiconductor product development. Each of these data points has to be monitored in real time to ensure that there are no missteps during the fabrication or testing part of product development. The environment required to set up monitoring and capture the monitored data again demands investment.

    Storage: Given how time-consuming and costly the process of generating semiconductor data is, it is vital to ensure that every data point is stored for long-term usage. The next product development might (out of nowhere) require data from different products to establish a scale or reference. Doing so is only possible if the data is stored in a standard format and is easily retrievable. This is also the major reason to invest in long-term semiconductor data storage servers, which in turn requires long-term investment too.

    In today’s data-driven world, semiconductor companies have to invest in the resources required to drive each of the steps in the cycle. Capturing, analyzing, and storing data in the world of semiconductors is more vital given how difficult and time-sensitive the product development process is.

    Without thoroughly analyzing the real silicon data, it is impossible to conclude whether the product is going to be a success or not. This is why data is one of the major building blocks of the semiconductor industry.


    Picture By Chetan Arvind Patil

    THE REASONS FOR COSTLY SEMICONDUCTOR DATA

    The life cycle of semiconductor data presents a clear picture of the resources required to capture the relevant semiconductor data points. As the industry moves towards much smaller and more advanced technology nodes along with innovative packaging technology, the associated cost will also keep rising.

    All these developments will have a major impact on the resources required to capture semiconductor data, as each new semiconductor technology development will demand upgraded FABs and OSATs along with all the supporting tools, equipment, and other hardware resources.

    Below are the major reasons why costly semiconductor data is here to stay and how it is also impacting today’s process nodes and packages out in the market.

    Equipment: The different sets of tools and equipment required to drive the successful development of a semiconductor product are key to generating relevant data. Each new design methodology, technology node, and packaging process demands new sets of equipment. This adds to the cost of development and leads to FAB/OSAT upgrades or expansion. This added cost is necessary to ensure that semiconductor data can be generated and analyzed successfully, and it clearly shows why the cost of semiconductor data is on the rise.

    Data Tool: The raw data is the first step towards analyzing how the product behaves on real silicon. To take it a step further, investment is required to procure advanced data analytics tools. The feature-based subscription cost associated with them is also on the rise and heavily impacts the data analysis part. On top of this, every other year a new set of programming solutions pushes semiconductor data engineers towards a new way of analyzing data. This also requires investment, not only in the tools but also in training and skill upgrades.

    Skills: Making the most of the data demands skills that take years to master. In today’s day and age, the explosion of new technology (on the software side) is also pushing engineers to pick up new skill sets on the go. This requires companies to invest not only in core product development resources (FAB to OSAT) but also in people who can explore data with limited information and present the full picture.

    Resources: Apart from human resources, the data also demands a unique support environment. This can range from a factory setup that enables data generation to a data warehouse that stores all the data-related information. Such resources require a dedicated, knowledgeable team and tools. All the cost associated with such processes goes toward producing the relevant semiconductor data. Without resources, it is impossible to do any task (not just data exploration).

    Process: Everything from technology node to package to materials (and beyond) goes through a different life cycle and process. This involves working in a dedicated lab that requires a unique set of tools. To ensure the processes are right, the tools have to be efficient, and the combination of process and tools eventually leads to trustworthy data. The journey of capturing semiconductor data is thus heavily dependent on these costly processes.

    Research: To drive next-gen FETs and XPUs, continuous research is required and it also demands data to validate the new technology/solution. This means a dedicated setup/lab with the next-gen process, equipment, and data tools. All this adds to the costly process of generating the data for research and development activities.

    The journey of semiconductor data is very interesting and high-tech. It certainly involves a lot of processes and steps that are dependent on different facilities, equipment, and human resources. As long as the goal is to come up with a new silicon solution, all these semiconductor resources will keep demanding high investment, and in the long run, it is the right thing to do.

    The growing importance of semiconductor solutions in every aspect of life is also raising the question as to whether the semiconductor data is the next-gen oil.


  • The Cloud Is Changing Semiconductor Industry

    Photo by Kvistholt Photography on Unsplash


    THE IMPACT OF CLOUD IN SEMICONDUCTOR INDUSTRY

    In the high-tech industry, software plays a crucial role. The ability to manage the task with a click of a button has a profound effect on different day-to-day activities. Software-driven digitization is also a key enabler in achieving productivity, which in turn allows companies/industries to capture the market worldwide.

    Over the last few decades, the software delivery model has evolved. From desktop to browser to smartphone apps to cloud, the different software delivery models have increased productivity. Industries worldwide have realized the potential of software-driven solutions and have rightly adopted strategies to maximize software deployment via the cloud. The same is true for the semiconductor industry.

    An advanced industry like semiconductors, where a single mistake can amount to millions of dollars in losses, certainly needs solutions that can help plug any gaps in the product development process. These solutions span both the technical and the business aspects of the product life cycle.

    To achieve high design and manufacturing standards, the semiconductor industry has also been relying heavily on cloud-based software solutions. This is why, over the last couple of decades, all major segments of the semiconductor industry have adopted a cloud strategy.

    There are certainly cases where the transformation is still happening today, but irrespective of where they are in the digital transformation journey, semiconductor design and manufacturing houses realize the importance of hopping on to cloud solutions and taking advantage of every detail by capturing and connecting all the possible data points.

    There are numerous ways in which cloud strategy is impacting the semiconductor industry:

    Optimization: To track all the project and product details, cloud-based solutions play a vital role. The tools provide an ability to capture every change (technical and business) and ensure that the information is available on the go. This way of capturing the details allows optimized operations that ensure that the design to manufacturing flow is well connected to the specifications and the execution flow.

    Defect And Error Free: By making use of smart and connected equipment with cloud-powered data analytic tools, semiconductor companies can ensure that the end product is defect-free. Deploying smart cloud tools that can capture gaps in the process/recipe ensures that there are no errors in the manufacturing flows.

    On-Time: Delivering products on time to the customer is key to capturing a large market, and it often requires swift coordination between multiple cross-functional teams. Connecting and sharing information seamlessly needs cloud-powered, data-backed solutions that can track every detail from forecasting to material handling. Capturing the minutest of details ensures that the product reaches the end customer on time.

    End-To-End Process: The semiconductor industry is built on top of several segments, from design houses to equipment manufacturers to FABs to OSAT manufacturing sites, and many more. All these individual segments should be connected end-to-end to provide a holistic view of how the product is developed, manufactured, and delivered. This is where cloud-based solutions are useful, and such solutions also bring transparency.

    To create a robust and high-quality semiconductor product, numerous data points are required. This is possible only with the help of tools and solutions that connect and provide a detailed view of different stages of the product development cycle. This is why a cloud-based end-to-end solution is heavily used in the semiconductor industry.

    As more FABs and OSATs get established in different parts of the world, the cloud strategy should be one of the top priorities. Doing so will only increase productivity and enable a better customer experience.


    Picture By Chetan Arvind Patil

    THE USE OF CLOUD IN SEMICONDUCTOR INDUSTRY

    Cloud-based solutions are used at every step of semiconductor product development. The efficiency and productivity such tools bring to different stages of product development ensure that the end product is defect-free.

    Different segments of the semiconductor product development require a unique cloud strategy. These can range from specific EDA tools to massive data storage instances. In the end, all these data points are connected/tied to a specific product. This way any team from anywhere can access the needed data/information to execute the task at hand.

    Research And Development: Designing devices to form the circuits and layouts that will eventually be fabricated into a silicon product is the first step towards developing a turnkey semiconductor solution. For several years, the semiconductor industry has relied on software-based solutions. In the last decade, the same software has moved from the desktop to the cloud. This has enabled designers and researchers to access files and libraries on the go. On top of that, it has taken away the need for the high-performance systems that were earlier required to carry out tons of simulations. Today, one can simply deploy a job and get on with other work while the simulations run on the cloud. Using the cloud, companies can also ensure that there is no IP infringement, as checks can run on the fly and tools can compare design solutions against a massive pool of cell libraries. All this ensures that the right product is developed by following industry-standard protocols.

    Data Analysis: Data generation occurs at every stage of silicon development, whether it is simulation in the design stage or fabrication/testing in FABs/OSATs. This is why advanced statistical analysis is carried out on every data point that is generated. Doing so on the fly means deploying solutions that can run close to the data-generating source. And if the data is not error-free, data engineers can question it and take a deeper look using analytical tools. In both scenarios, highly advanced tools are required. This means making use of cloud-based solutions that are easy to access and are loaded with features to enable accurate data exploration.

    Supply Chain Management: Delivering products to the end customer often requires connecting several dots from different systems. This is why supply chain management is needed. It ensures the product and its processes are tracked using the unique bill of materials. This often requires relying on specific cloud-based tools that can swiftly retrieve any information related to the products to provide its full history from inception to delivery. Such a task without cloud tools will certainly invite errors.

    Market Analysis: Capturing market trends to understand which products will give maximum profits is key to success. This often requires capturing different data points and then aligning the product roadmap as per the market (which means the customer) requirement. This ensures that the CapEx is diverted towards high revenue products. Such planning and projection are not possible without capturing different data points and customer developments. This is where cloud-based market analysis comes in handy and ensures that the projects are profitable.

    Resource Management: Semiconductor equipment, tools, and labs require high CapEx and such investment is viable only if there is a high ROI. To achieve positive ROI, the resources need to be looked after and that requires efficient management. This means periodic maintenance to ensure minimum downtime. For such management, cloud-based solutions are deployed so that whenever tools go down or require maintenance, the data can be captured to raise the alert beforehand.

    Factory Operation: Running FABs and OSATs 24×7 is key to ensuring that the facilities break even as quickly as possible. This requires capturing second-by-second activity from every corner of the factory, which is possible only when a connected system is deployed that can raise an alarm in case of downtime and also provide the remote status of every piece of equipment and every tool. Cloud-based solutions already play a key role here and are thus heavily used by semiconductor FABs and OSATs.

    Logistics: Shipping is a big part of the semiconductor product development cycle. The wafers often come from an outside supplier, get fabricated at a different location, and are then tested in yet another part of the world. This requires optimized logistics and real-time tracking of material. While the majority of logistics providers already rely heavily on cloud-based solutions to track and deliver packages, it often happens that semiconductor companies also have to invest internally in systems that can generate shipping labels and track where the customer deliveries are. This is why cloud-based logistics solutions are handy for semiconductor companies.

    Archiving: Saving every data point related to the product is vital. This data ranges from design files to test data to financial records. All these need to be stored for a very long term so that whenever a query or comparison has to be done, the data can be easily retrieved. Quick retrieval and analysis are only possible if cloud solutions are deployed.

    The use of the cloud in the semiconductor industry will keep growing. In the next few years, more sophisticated and smart tools will be deployed. Factory operations are already utilizing high-tech cloud solutions for product fabrication, and design and supply chain management are not far behind. The cloud market for the semiconductor industry will only keep growing, with new players entering product development every year.


  • The Role Of Root Cause Analysis In Semiconductor Manufacturing

    Photo by Mathew Schwartz on Unsplash


    THE IMPACT OF SEMICONDUCTOR ROOT CAUSE ANALYSIS

    Delivering high-grade products is the ultimate goal of every manufacturer. To do so, different industry-specific standards and processes are used. The same applies to the semiconductor manufacturing industry.

    In semiconductors, as the technology-node shrinks, the cost to manufacture tiny silicon increases too. To ensure there is no waste of materials, time, and cost, several different strategies are deployed to screen the part before it moves ahead in the fabrication/manufacturing line.

    One such strategy is root cause analysis. It often happens that the product being developed encounters an issue during the qualification, testing, or packaging stage. No matter where the failure occurs, it is critical to understand the root cause, especially in an industry like semiconductor manufacturing, where every failing part can jeopardize not only the product itself but also the system into which it will eventually be integrated. Such a scenario can certainly lead to millions of dollars in losses, apart from a damaged business reputation.

    This is why holding on to every failing product and carrying out detailed root cause analysis is one of the major pillars of the semiconductor industry.

    Over the last few decades, as the devices are getting tinier than ever, the importance of root cause analysis is growing due to the several impacts it has on the product:

    Cost: Finding the root cause of why a product failed empowers the team with data points to take the necessary actions. It often happens that the root cause lies in the setup or lab where the qualification or testing is being carried out and has nothing to do with the product itself. Root cause analysis and the resulting actions can save cost by eliminating the need to redo either the design or the qualification. Root cause analysis of failing products also helps ensure that the product will not fail in the field, thus protecting years of investment.

    TTM: Given the stiff competition in the semiconductor industry, the ultimate goal of every design and manufacturing house is to ensure that the product is launched within the planned time frame. In case of failures, root cause analysis provides a way to capture any severe design or process issue early on. This allows course correction to ensure the product still makes it to the market in time.

    Quality: Qualification is an important process before the product gets released for production. Root cause analysis of any product that fails during the development stage ensures that the product meets high-quality industry standards. The standards become stricter depending on the target domain, and so does the importance of root cause analysis of the failing product/part.

    DPPM/DPPB: Defective Parts Per Million/Billion is an industry-standard metric, and the goal of semiconductor manufacturing is to lower DPPM/DPPB. In case of field failures, root cause analysis comes in handy, as it ensures the severe fails are captured to lower the DPPM/DPPB further. This matters even more when the product/part will be used for critical applications like automotive or wireless communications.
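
    DPPM itself is a simple ratio; the short Python sketch below shows the arithmetic with made-up shipment and failure counts.

      def dppm(defective_parts, total_parts_shipped):
          """Defective Parts Per Million = (defective / total shipped) x 1,000,000."""
          return defective_parts * 1_000_000 / total_parts_shipped

      # Hypothetical example: 3 field failures out of 2 million shipped parts.
      print(dppm(3, 2_000_000))  # -> 1.5 DPPM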

    Root cause analysis has a major impact on semiconductor manufacturing. With new FETs and equipment launching every year, the role played by analyzing every product-related failure, small or large, is vital for both FAB and FAB-LESS companies.


    Picture By Chetan Arvind Patil

    THE STEPS OF SEMICONDUCTOR ROOT CAUSE ANALYSIS

    To capture the root cause that leads to the failure of a product, several steps are followed. Each of these steps plays a key role in providing data points to establish the root cause to drive defect-free manufacturing.

    Inspection: Any part/product that fails first has to go through a detailed inspection. The inspection is performed depending on the state of the product (whether in wafer, die, or assembled packaged form). The images generated are then used to establish whether any non-technical cause (handling, stress, recipe issue, etc.) led to the failure of the product. X-Ray and SEM are widely used, as these techniques provide internal product details.

    Data: The data points for the failing part/product come from different stages. All parts carry markings to trace their origin. As a starting point, the first data point is collected by testing the product without damaging it. Another data point can be inspection data that the FAB and OSAT might have on the failing part/product. Apart from these two, several other data points, such as recipe errors, handling issues, materials used, and many more, are captured to establish whether the root cause can be determined from the data itself.

    Testing: Testing can be performed on ATE or on the bench. On ATE, the part/product is tested to capture the failing tests. This provides an early hint about which blocks of the part/product are failing. If there is no firm conclusion, then bench validation provides a way to find out whether the part indeed failed or whether there was a setup error. In many cases, different testing scenarios are also used to capture testing-related data points.

    Reproduce Failure: This is the basis of root cause analysis. Once the failure mode is understood, the setup is replicated with the exact same error to determine whether the part/product fails repeatedly. This often requires a good setup and known-good parts/products, with the only varying parameter being the setup, in order to capture the failing scenario.

    Localization: Root cause analysis can also lead to the discovery of issues in the product itself. This often requires the team to establish the location of the design issue. This is achieved with the help of all the above steps and, most importantly, by using equipment to show that when the part/product is biased, it fails due to a hotspot found at a specific location in the layout. Advanced equipment is used to capture such data.

    Documentation: In the end, everything needs to be documented. Documentation is a major part of not only capturing the reasons for part/product failure but also enables cross-functional learning. In the future, the documentation can be used to minimize the efforts to establish root cause in cases where the failing trend is similar. 8D and other problem-solving methods are used to document detailed root cause analysis.

    Finding the cause of failure is like finding a needle in a haystack. The complexity that products are bringing (due to advanced technology nodes) is making root cause analysis an important and, at the same time, difficult task.


    Picture By Chetan Arvind Patil

    THE PILLARS OF SEMICONDUCTOR ROOT CAUSE ANALYSIS

    Performing root cause analysis is certainly a team effort. Experienced people play a vital role in reducing the time it takes to establish the root cause.

    However, there are two major pillars without which root cause analysis is not possible. These pillars also require massive investment to establish and often have a high operating cost.

    Lab: A dedicated space is required where experienced engineers can take the failing part and establish the root cause by biasing the part in a controlled environment. This requires skill-based training and the ability to understand any failing product/part that comes into the lab.

    Equipment: To run a root cause analysis lab, advanced equipment is also needed. This equipment can perform detailed inspection, apart from carrying out the different setups/tests needed to reproduce the failure.

    The majority of the semiconductor companies have an in-house dedicated lab to perform root cause analysis. Some companies prefer to outsource. Eventually, it is all about managing resources and lowering the cost without compromising on the DPPM/DPPB.

    As the industry moves towards more complex products that will power everything everywhere, capturing the root cause of every failing product during development or the production stage, will become more vital than ever.


  • The Challenges And Way Forward For Computer Architecture In Semiconductor Industry

    Photo by Luan Gjokaj on Unsplash


    OVERVIEW

    Computers are designed to provide real-time feedback to all user requests. To enable such real-time feedback, the Central Processing Unit (CPU) is vital. CPUs are also referred to as processing units or simply processors. These incredibly small semiconductor units are the brain of the computer and are capable of performing Millions/Billions of Instructions Per Second (MIPS/GIPS). Higher MIPS/GIPS means faster data processing.
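
    MIPS/GIPS is simply the number of instructions executed per unit of time; the short sketch below shows the arithmetic with hypothetical numbers.

      def mips(instructions_executed, elapsed_seconds):
          """Millions of Instructions Per Second."""
          return instructions_executed / elapsed_seconds / 1_000_000

      # Hypothetical core retiring 12 billion instructions in 2 seconds -> 6000 MIPS (6 GIPS).
      print(mips(12_000_000_000, 2.0))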

    A lot of processing goes on inside these processing units. With the advancement of technology nodes, more processing units are being glued together to form a System-On-A-Chip (SoC). These SoCs have different individual units, like GPU, DRAM, Neural Engine, Cache, HBM, and ASIC accelerators, apart from the CPU itself.

    It is incredibly difficult to design an SoC that has the best of two important worlds of computer architecture: Power and Performance.

    Both in academia and in industry, Computer Architects (responsible for the design and development of next-gen CPUs/SoCs) play a key role and are often presented with the challenge of providing faster performance at the lowest possible power consumption. It is a difficult problem to solve.

    Battery technology has not advanced at the speed at which SoC processing capability has. Shrinking technology nodes offer computer architects opportunities to pack in more processing power, but at the same time they invite issues related to thermal and power budgets.

    All this has led to semiconductor companies focusing on design challenges around the power and performance of the SoC.


    CHALLENGES

    The semiconductor industry has been focusing on two major SoC design challenges:

    • Challenge 1: Efficient and low-latency SoC design for portable devices
    • Challenge 2: High-throughput and performance-oriented SoC design for data centers

    Picture By Chetan Arvind Patil

    Challenge 1:

    • Portable:
      • Portable devices suffer from the constraint of battery capacity. Battery capacity has been increasing mainly because the boards inside these devices have been shrinking, thanks to the shrinking transistor size.
      • This has allowed OEMs to fit in larger lithium-ion batteries. However, to balance form factor and portability, batteries cannot be scaled up forever. It is a challenge for OEMs to manage portability by balancing battery size while making the computer system efficient with low latency.
    • Efficiency And Low Latency
      • To tackle efficiency and low latency, innovative designs are coming to market with the ability to adapt the clock and voltage domains depending on the application being executed by the user. It is no longer about how many cores are in the SoC, but about how an application-specific core can provide a much better user experience than ever.
      • This has presented researchers with the interesting problem of improving performance per watt (PPW). To improve PPW, researchers around the globe are taking different approaches around DVFS schemes, apart from improving transistor-level techniques (a rough sketch of this trade-off follows this list).
      • Frequency and voltage scaling also has a direct impact on response time. Processing units like CPUs are designed to provide low latency so that all incoming requests can be catered to in real time.
      • Improving efficiency without compromising on latency is still a big challenge for computer architects.
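
    To make the PPW trade-off concrete, here is a rough sketch based on the standard dynamic power relation (P ≈ C·V²·f) and the simplifying assumption that performance scales linearly with frequency; the capacitance value and the voltage/frequency operating points are made-up numbers.

      # Rough DVFS illustration: dynamic power scales as C * V^2 * f, while performance
      # is assumed (for illustration only) to scale linearly with frequency.
      def dynamic_power_watts(c_eff_farads, v_volts, f_hz):
          return c_eff_farads * v_volts ** 2 * f_hz

      def perf_per_watt(f_hz, power_watts):
          # Frequency is used as a stand-in for performance (ops/s).
          return f_hz / power_watts

      C_EFF = 1e-9  # hypothetical effective switched capacitance (farads)

      for v, f in [(1.0, 3.0e9), (0.8, 2.0e9)]:  # hypothetical DVFS operating points
          p = dynamic_power_watts(C_EFF, v, f)
          print(f"V={v:.1f} V, f={f/1e9:.1f} GHz -> {p:.2f} W, {perf_per_watt(f, p)/1e9:.2f} Gops/W")

    Lowering voltage and frequency improves PPW but reduces peak performance, which is exactly the efficiency-versus-latency tension described above.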

    Challenge 2:

    • Data Center:
      • On the opposite pole, data centers are designed to be compute-intensive. The SoCs required to cater to data centers have exactly the opposite needs compared to portable devices. As companies become data aggregators, the analysis requires dedicated hardware that provides streamlined computation of the data on the go.
      • This is prompting companies like Google, Facebook, and Amazon to come up with their silicon that understands the data being generated and how to swiftly analyze it on the go.
    • Performance And High Throughput:
      • Designing a custom SoC requires a fresh look and is drastically different from the block-based approach. Improving throughput requires high-speed interconnects to remove bottlenecks in data processing; otherwise, performance will be affected.
      • In order to improve throughput, the data needs to reside near the computation block. This demands new ways to predict which data will be used, in order to bring it into the cache or to add a memory hierarchy with the help of MCDRAM.

    The challenges are many, and researchers in both academia and industry are already working to provide elegant computer architectures.


    WAY FORWARD

    As the needs of the applications running on computer systems change, so does the approach to designing SoCs. Various examples from different companies show how the development of computer architecture is changing and will eventually help others come up with new computer architectures.

    These new architecture designs are taking the traditional approach of computer architecture and providing a different way to tackle both memory and compute bottlenecks.

    Cerebras came up with the Wafer-Scale Engine (WSE), which is built on the concept of fabricating a full wafer as a single SoC. The performance data of the WSE points to a promising future in which computer architecture moves from die-level to wafer-level design. The WSE also takes a different approach to interconnects by utilizing wafer scribe lines to transfer data, which provides more bandwidth.

    Fungible’s Data Processing Unit (DPU) architecture is another way forward that shows how SoCs will increasingly be designed for scale-out systems to handle massive data.


    Picture By Chetan Arvind Patil

    Google’s TPU and Amazon’s Inferentia show how custom ASIC-based SoCs will become the de facto choice. Companies that generate a lot of data will try to run their data centers on in-house developed SoCs.

    Apple’s M1 launch showed how ARM will start eating into the x86 market for energy-efficient portable devices. In a few years, the integration will become more intuitive and might attract other x86 portable device OEMs who have failed to take Windows on ARM to its true potential.

    NVIDIA’s bid to acquire ARM shows that future GPUs will be designed with a blend of fusion technology that combines ARM/CPU with GPU more than ever. This will allow data centers to improve on latency apart from focusing on throughput.

    In the end, all these are promising developments for the computer architecture community, providing numerous opportunities to research and develop new ways to enable lower latency and higher throughput while balancing power consumption.


  • The Smart

    Photo by Rahul Chakraborty on Unsplash


    THE SMART

    As technology progresses, the world is becoming smarter. Decision making is becoming more data-driven rather than experience-driven. People around the globe rely more on smart systems to find solutions to their daily problems. With the proliferation of Artificial Intelligence and its influence on day-to-day life, the world is only going to become more reliant on smart services and products.

    Smart software and hardware systems have already found their way into every consumer product. Cars are becoming more connected. Homes are becoming more energy-efficient due to data-driven decisions. Logistics and transportation are data-enabled too. All this has enabled companies to spend wisely while being profitable at the same time.

    The next decade is going to see the wider adoption of smart devices. The impact of these devices is going to enable a smarter ecosystem. Software companies are also launching smart hardware, which is also helping in the growth of the smart ecosystem market.

    There are certain key areas where smart technology is going to enjoy exponential growth.


    THE SMART KEY AREAS

    Major areas where the smart technology is going to be more profitable are:

    • Smart Data
    • Smart Environment
    • Smart Manufacturing
    • Smart Transportation

    Smart Data: The systems being deployed across cities, offices, houses, industrial areas, etc., are by default designed to monitor their surroundings. The major goal of these systems is to capture data in the cleanest form possible. The subsequent system then doesn’t have to post-process the data, which ensures that a decision is provided in the shortest possible time. Data collection, processing, and presentation are the critical pieces in classifying a system as smart-data ready. Smart data has already seen tremendous growth in the last decade and promises to stay on the same path.

    Smart Environment: In the last decade, as technology innovation has progressed, so have its use and deployment. Turnkey infrastructure projects have embraced the new possibilities that smart solutions are capable of providing. Buildings are becoming more sensor-driven. Cities are becoming more connected. Open spaces are more secure due to smart security cameras. Schools and offices are more eco-friendly. All this is becoming possible due to the efficient use of spaces created with smart systems, which can project and provide an optimized solution against the capital expenditure. The net-zero concept is the main driver in enabling smart environments across cities and countries. With new infrastructure projects, the smart environment domain is only going to accelerate the growth and adoption of smarter technologies.

    Picture By Chetan Arvind Patil

    Smart Manufacturing: Manufacturing is hard. The time and effort required to build a product involve a lot of steps and resources. Any company that is into manufacturing has one major goal: eliminate waste. The waste can occur at any stage, from procurement to development to delivery. Money saved in manufacturing without compromising quality is money earned. Companies are relying more on robotic decision making (while balancing human resources) to optimize the manufacturing process. Smart manufacturing is also relying on AI-driven data decisions to make more profound judgments based on market need, in order to manufacture products efficiently. Industry 4.0 is here, but in a few years’ time the world will move to Industry 5.0, which will rely even more on smart manufacturing. As factories start to invest in smart manufacturing to reduce waste, the opportunities for smart solution providers will also grow. It has already started happening in automobile and semiconductor manufacturing.

    Smart Transportation: It is human nature to move from one place to another in search of better opportunities. Uber and Lyft have already provided a sneak peek at what future transportation is going to look like. With Waymo expanding its driverless riding services, more driverless cars will inevitably be seen around. This points to how the world is going to adopt smart transportation that is connected and statistically geared to be safer than human-driven cars. The logistics domain is also going to adopt these smart technologies to save on cost and become more profitable. As more companies and startups put in the talent to make vehicle ecosystems smarter, the opportunities in this area will also keep growing.

    These are the four key areas where the smart ecosystem is enjoying (and will keep enjoying) faster adoption and positive growth.


    THE SMART FUTURE

    Smart solutions rely heavily on both smart software and smart hardware.

    Smart Software: In the last decade, software has become more advanced than ever. The machine learning, deep learning, and artificial intelligence solutions created on top of the vast amounts of data collected through internet adoption have ensured that systems can anticipate a need before it arises. As more people come on board the online world, the growth and usage of smart software is also going to increase.

    Smart Hardware: Hardware development has kept pace with software; however, hardware innovation has always relied on massive, power-hungry systems. Supercomputers are capable of providing solutions in seconds, but that comes at a steep cost. Slowly, hardware is also getting embedded with artificial intelligence at the architecture design stage, making it more adaptive and thus ensuring smart solutions at low cost. The possibility of performing massive computation at the source is going to make computer systems smarter and faster than ever.

    It will be interesting to see how the growth in the smart software and the smart hardware solutions in the next decade is going to shape the smart world.


  • The HaaS

    Photo by Taylor Vick on Unsplash


    The software business delivery model has been constantly changing. It has adapted to the needs of the market by leveraging every possible way to deliver software solutions hassle-free. From installing software using a CD-ROM or USB flash drive to installing over the internet, the ease of accessing and using software has changed a lot.

    Customers have also adapted to the changing landscape. Moving from worrying about configuring the license key correctly to a subscription (monthly/yearly) model has provided numerous benefits.

    One major change in the software delivery model has been cloud services, which have pushed applications from the desktop to the browser. This has greatly eliminated the need to configure the operating system and environment settings required to ensure that the application works flawlessly. Today, with the click of a button, one can securely log on to a website and access software tools using any browser without worrying about the underlying operating system.

    While software has certainly made great progress, hardware has not been far behind. The sole reason one can access software tools remotely is that data centers located in different parts of the world are working in harmony. This ensures that all requests are processed with zero downtime and minimal delay. The underlying network of hardware ensures that latency is not a hindrance in accessing software features. This has removed the need to maintain self-hosted servers and has allowed customers to instead invest in other critical solutions to make day-to-day tasks more productive.

    The software licensing and delivery model of today is termed Software-As-A-Service (SaaS). It is a subscription-driven model where the application is hosted on a server and can be simultaneously accessed by all subscribers without any resource constraints. The server has all the software dependencies pre-configured to let the developers focus on delivery.

    Running the SaaS model requires a set of hardware resources. Instead of spending millions of dollars on hardware infrastructure and maintenance, many enterprises and solution seekers have moved to the hardware licensing and delivery model, termed Hardware-As-A-Service (HaaS).

    The major difference between SaaS and HaaS is the application. SaaS is primarily about software, while HaaS covers not just computer hardware and systems but also all the smart hardware solutions that run the SaaS.


    THE HaaS APPLICATIONS

    The application areas of any given product are what differentiate it from competitors. In the last decade, HaaS applications have increased, and many smart hardware providers have moved to a product-based service model.

    The most important application area of HaaS has been the data center, where cloud service providers (Amazon, Google, Microsoft, etc.) and content delivery networks (Akamai, Cloudflare, etc.) have created a plethora of resources to cater to the growing needs and demands of software enterprises. Anyone can rent as many nodes as required and deploy a solution. Shared and dedicated website hosting also falls into the same category. Renting on a per-month basis is cheaper and more reliable than buying the hardware and setting it up in the office. HaaS (Hardware == Data Center) also eliminates the cost of setting up a dedicated team to handle all data center related issues.

    The growth of smart devices has led to a change in the way consumers consume these products. Taking the cue from the smartphone business, it is evident that a new version of a smartphone is launched every year, automatically prompting consumers to buy a new one because of the attractive features. Shelling out more than $500 every year on a smartphone is not something every consumer would like to do. To tackle this issue, service providers (mainly cellular ones) moved to the HaaS model (Hardware == Smart Device), in which they started providing the smartphone as part of a monthly plan rather than requiring payment upfront. It certainly has pros and cons, but it has given consumers the ability to switch to better devices as and when required. This smartphone subscription model is now being extended to several other smart devices like cameras, drones, watches, security systems, and TVs; the list is endless.

    Picture By Chetan Arvind Patil

    Mobility is a very crucial part of day-to-day life. In the transportation area, the HaaS model (Hardware == Vehicle) has been in use for decades. The model of renting a vehicle for a specific period is well proven and widely used. Due to the proliferation of vehicle-for-hire services, the HaaS model is being applied more extensively. From cars to skateboards to electric bikes to bicycles, everything is now available under the HaaS model. The growth of point-to-point mobility will keep extending the application area of the HaaS model within the mobility domain. Gen-Z and Gen-Alpha are mostly going to rent vehicles under the HaaS model rather than spending money to purchase one.

    As countries around the world move towards 5G and Wi-Fi 6, the digital landscape will also change as more consumers gain the ability to connect to the online world. This will demand a vast array of internet of things devices deployed across cities, states, and countries. Businesses (internet service providers) are unlikely to set up all this technology on their own. This is where the HaaS (Hardware == Internet of Things) model will come in as a way to save cost while providing services.

    Apart from the application areas discussed above, there are still miscellaneous domains where the HaaS model can be applied. It is already in use in the airline industry, where purchasing aircraft has given way to long-term rental agreements. A similar concept will start to take hold in the developing world, where services are more vital now than ever.


    THE HaaS BENEFITS

    Technology, when applied correctly, provides numerous benefits. SaaS brought many benefits to the software world, and the same applies to the hardware world thanks to the HaaS implementation.

    Cost is one of the major benefits of HaaS. The ability to rent as many nodes as needed, for only as many days as needed, has provided a new way for consumers to manage cost. In many cases, consumers can then spend money wisely in other critical areas. The ability to terminate the service and forget about the infrastructure is also one of the keys to why the HaaS model is getting popular.

    HaaS provides a way to access services anywhere. From data centers to mobility, everything is available at the click of a button. HaaS will be deployed and available at the doorstep. The majority of cities around the world are already equipped with several HaaS services, and this has provided reliable uptime for those services.

    Picture By Chetan Arvind Patil

    Portability is another important benefit of HaaS. The option to switch a smart device for a new one without worrying about the cost is one example. Even with data centers, one can move from one cloud HaaS provider to another without needing to understand the underlying process and technical challenges.

    With SaaS, the quality and reliability of software services improved. Always-on services and customer support were a great addition to the SaaS model and have ensured that customers are never without help. The same quality and reliability solutions have been extended to the HaaS model and are taking the customer experience to a new level.

    Digital transformation, expanding high-speed networks, and advancements in semiconductor solutions are only going to improve the HaaS experience.


    THE HaaS FUTURE

    Given the expansion of artificial intelligence and autonomous solutions, it is highly unlikely that the HaaS delivery model will change much from the existing one.

    The HaaS in the future will keep evolving around the following three important aspects:

    • Users
    • Middleware
    • Services
    Picture By Chetan Arvind Patil

    Users are the consumers of the HaaS model and pay for the service. Middleware consists of the connectors between users and service providers and takes a share from both sides of the business. Services are the different solution providers offering innovative services and products.

    It will be interesting to see if the industry moves to a new way of subscribing to these services. Currently, the business revolves around a pay-as-you-use model, and it has worked wonders for both service providers and consumers. However, every model evolves, and with the rate at which technology is advancing, it will be important to adapt the business model accordingly. Only time will tell what new ways of supporting these services will emerge.


  • The Edge Computing

    The Edge Computing

    Photo by Tony Stoddard on Unsplash

    In hardware systems, a cache is used by Central Processing Units (CPUs) to reduce the time required to access data from lower-level memory like RAM. It does so by bringing data beforehand (depending on the policy used) into the cache, which resides closer to the CPU.

    This allows CPUs to perform calculations faster as the cycle time gets reduced and the latency is lowered. It also lowers the time and energy required to perform the computation task.

    The same concept applies to software systems, in which caching data leads to faster access, which reduces response time and improves the user experience. A Content Delivery Network (CDN) is one of the best examples of software caching. Even web browsers use it.
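
    The idea behind both forms of caching can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical time-to-live (TTL) cache, not tied to any particular CPU or CDN design: the slow fetch happens only once, and repeated requests within the TTL window are served from the local copy.

        import time

        class TTLCache:
            """Minimal illustrative cache: serve a local copy until it expires."""

            def __init__(self, ttl_seconds: float = 60.0):
                self.ttl = ttl_seconds
                self.store = {}  # key -> (value, time it was stored)

            def get(self, key, fetch_fn):
                entry = self.store.get(key)
                if entry is not None:
                    value, stored_at = entry
                    if time.time() - stored_at < self.ttl:
                        return value            # cache hit: no slow fetch needed
                value = fetch_fn(key)           # cache miss: go to the slow source
                self.store[key] = (value, time.time())
                return value

        def fetch_from_origin(key):
            """Stand-in for a slow origin server or lower-level memory access."""
            time.sleep(0.5)                     # simulate fetch latency
            return f"content-for-{key}"

        cache = TTLCache(ttl_seconds=30)
        print(cache.get("home-page", fetch_from_origin))  # slow: fetched from origin
        print(cache.get("home-page", fetch_from_origin))  # fast: served from the cache

    The sketch also shows the limitation discussed next: if the origin data changes before the TTL expires, the cached copy is stale until a fresh fetch is made.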

    Picture By Chetan Arvind Patil

    However, both hardware and software caching systems are good from a data point of view, but not for computation. In hardware systems, data is brought closer to the CPU for processing, while in software systems the data is brought closer to the user after processing. For both systems, if the source data is updated, then new data has to be fetched, which adds cost, time, and energy.

    With the growing digital user base and improved connectivity around the world, the importance of caching is increasing to cater to the data and computation demand.

    In order to serve this growing demand, edge computing is being deployed. In a nutshell, edge computing provides data and computation closer to the nodes requesting them, with the help of a widely deployed array of hardware. This leads to faster response times, reduces the time to deliver content, and provides massive distributed processing power.


    EDGE COMPUTING BENEFITS

    Large-scale deployment of edge computing is going to be a win-win for both businesses and consumers. Hardware and software deployment and development around edge computing are also going to bring a tremendous number of skill-based employment opportunities. Edge computing has numerous benefits, and the semiconductor industry already offers several solutions for it.

    One of the important benefits of edge computing is computation speed. Edge computing can distribute a single task across different edge nodes, which reduces the time to accomplish the task. With 5G and Wi-Fi 6, offloading tasks to nearby edge nodes is going to ensure low latency and reliable service.
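
    As an illustration of that offloading decision, here is a minimal sketch in Python (the node names, latency figures, and capacity fields are made up for the example): the device measures latency to a few nearby edge nodes and sends the task to the fastest one that still has capacity.

        from dataclasses import dataclass

        @dataclass
        class EdgeNode:
            name: str
            latency_ms: float    # measured round-trip time to the node
            free_capacity: int   # task slots the node reports as free

        def pick_edge_node(nodes, slots_needed=1):
            """Choose the lowest-latency node that can still accept the task."""
            candidates = [n for n in nodes if n.free_capacity >= slots_needed]
            if not candidates:
                return None      # fall back to the cloud or local processing
            return min(candidates, key=lambda n: n.latency_ms)

        nodes = [
            EdgeNode("edge-cell-a", latency_ms=8.2, free_capacity=4),
            EdgeNode("edge-cell-b", latency_ms=3.5, free_capacity=0),
            EdgeNode("edge-cell-c", latency_ms=5.1, free_capacity=2),
        ]
        best = pick_edge_node(nodes)
        print(f"Offloading task to {best.name}" if best else "No edge capacity, run locally")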

    By reducing the number of hops required to access computation and data resources, edge computing will also provide better security. The computation to be performed can be done by nearby nodes, and the data to be accessed can be stored on the same nodes. With fewer hops, there are fewer points where an intruder can try to access the data.

    Picture By Chetan Arvind Patil

    Edge computing is also designed to be scalable. There is no restriction on the number of edge nodes a specific network can hold, as long as the edge computing network balances the supply and demand requirements and does not become too costly.

    Adoption of 5G will be a gradual process, and consumers will take time to upgrade to 5G-compatible devices. To cater to users on previous-generation (2.5G/3G/4G) technology, interoperability is important. Edge computing is capable of providing this interoperability, so both new and legacy devices can work with the same edge node.

    One of the major application areas of edge computing is going to be On-Device Artificial Intelligence (AI), which will allow smart devices (smartphones, cameras, sensors, etc.) to offload compute-intensive tasks to nearby edge nodes.

    Edge computing will take the digital experience to the next level.


    EDGE COMPUTING OPPORTUNITIES

    Every new technological solution presents new opportunities for businesses, and consumers also gain from the new products and services. From a business point of view, edge computing provides several ways to generate revenue.

    Edge computing will drive the introduction of new products, both software and hardware. Businesses can explore edge computing in the form of Edge As A Service (EAAS) by allowing smaller companies access to edge resources in order to provide over-the-top services. This will also drive revenue.

    Picture By Chetan Arvind Patil

    Computing at the edge also requires a network of hardware that can accept a task request, process it, and send the result back to the requester. To develop such smart hardware for edge computing, semiconductor companies are coming up with low-energy, high-performance electronic chips, which can be used by developers to launch new hardware products. All this opens new revenue opportunities across segments for small, mid-size, and large enterprises.

    Edge computing on top of smart hardware will increase the adoption rate of digital services. This will also lead to the acquisition of new consumers and open multiple revenue opportunities.


    Edge computing offers exciting benefits and opportunities, and both businesses and consumers stand to gain.

    The major application areas that edge computing is going to drive are smart cars, smart cities, smart industry, smart manufacturing, and smart home automation systems.

    Picture By Chetan Arvind Patil

    It will be interesting to see how the industry comes forward and uses edge computing to launch new digital solutions for the market.

    The country that deploys and adopts edge-computing-enabled 5G networks fastest is going to be the leader in digital services and information technology for the next decade.


  • The Hearables

    The Hearables

    Photo by Yogendra Singh on Unsplash


    THE HEARABLES

    The hearables are smart, technology-enabled over-the-ear or in-the-ear devices that are mainly used for listening to music. Natural language assistants have equipped the hearables with the ability to talk. These smart assistants can interactively talk to the user through the inbuilt speakers and can also listen to the user with the help of the microphone. The hearables also have inbuilt features like ambient sound control and active noise cancellation.

    The majority of the hearables use Bluetooth for wireless connectivity, which allows voice control of the smartphone with the help of a virtual assistant. Not everyone is comfortable talking to these virtual assistants in public, but with increasing use, it is becoming the new normal.

    Picture By Chetan Arvind Patil

    Every major smartphone and audio company in the world has launched its own hearables. Technology companies are merging the best of audio, hardware, and software to create new solutions around the hearables. Google is deploying an artificial intelligence-enabled voice assistant, while Amazon is focused on enhancing the online shopping and audiobook experience with the help of the hearables.

    There are already numerous applications of the hearables, and it will be exciting to see what consumers get to experience in the coming years.


    THE HEARABLES APPLICATIONS

    The application of any technological solution requires multiple functions to work in harmony. First and foremost is the underlying hardware technology, second is the connectivity (mainly wireless), third is the software system running on top of the hardware, and fourth is the application ecosystem driven by the developer community.

    From the pre-internet to the post-internet era, all consumer-oriented solutions have relied on these four major points for success, whether it is Windows powering the majority of the world's personal computers or Android changing the landscape of the mobile industry. The same applies to the hearables, except that the majority of these underlying solutions (hardware, software, connectivity, and ecosystem) already existed beforehand. With the hearables, companies have simply taken the existing software/hardware infrastructure and extended it to the hearables. So far, the outcome has been nothing but positive.

    Entertainment has always been the default application of the hearables. The small form factor and features like active noise cancellation have surely taken the audio experience to the next level. Applications on mobile devices can now connect to the hearables via Bluetooth, which provides the ability to control different features using smart gestures.

    Picture By Chetan Arvind Patil

    Another major application of the hearables has been to interact swiftly with the voice assistant. Offloading tasks by sending instructions via the hearables has made smart devices more interactive. It can be used to retrieve important information like the weather forecast, nearby places, and much other useful data. All this has been possible due to the underlying smart hardware that can capture and process natural voice.

    The hearables also offer audio privacy and enable one to interact with the party on the other side without compromising confidential information exchange. There are still gaps, mainly from the speaker's point of view, and it will take breakthrough technology to overcome some of them.

    Cross reality by default requires a device paired with the ears. The hearables are the perfect candidate due to their small form factor. The hearables combined with cross reality are going to provide new avenues for projecting information in the virtual world.

    In the coming decade, the application areas of the hearables will surely expand.


    THE HEARABLES OPPORTUNITY

    Smart features require the smartest of hardware and software.

    The hearables might be marketed by a specific company, but a lot goes on inside these little devices, which requires effort from several design, development, and manufacturing companies. The software part is still handled mainly by the vendors selling the hearables, but the application ecosystem opens up a lot of opportunities for developers.

    Hardware companies, mainly semiconductor ones, will have a lot of opportunities to launch new low-power and smaller electronic audio/voice chips. Noise cancellation systems will have to keep improving to account for the work-from-anywhere scenario, which also opens up new opportunities and revenue for semiconductor companies. Context-awareness is another promising feature, for which the hearables will require smart sensors in order to adapt to the environment.

    Wired charging will be phased out of the market in the next decade, and the hearables will also have to adopt wireless charging systems. Different wireless charging solutions are already available in the market, but fast wireless charging will be the key until battery technology evolves. A portable battery bank cum carrying case for the hearables is another area that companies can target.

    Picture By Chetan Arvind Patil

    Software innovation will also open up new opportunities. It is expected that, going forward, the hearables will be able to connect to cellular towers directly, and when that happens, services will have to be provided mostly over voice. This will require smart applications to make the most of the hearables. For example, a banking application on the hearables could provide details about the account balance or initiate a transfer with voice commands.
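
    As a toy illustration of such a voice-first application (the intents, the account data, and the handler below are entirely hypothetical and not any real banking API), a sketch might map recognized speech to a handful of intents:

        def handle_banking_command(spoken_text: str, account: dict) -> str:
            """Map a recognized voice command to a spoken response (illustrative only)."""
            text = spoken_text.lower()
            if "balance" in text:
                return f"Your balance is {account['balance']} dollars."
            if "transfer" in text:
                # A real application would confirm the amount, payee, and identity first.
                return "Okay, starting a transfer. Who should receive the money?"
            return "Sorry, I did not understand that request."

        account = {"balance": 1250.75}
        print(handle_banking_command("What is my balance?", account))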

    Voice assistants are already in use, but local language support will be key. Providing local language support will increase the reach of the hearables. Audio control is already part of the hearables, and extending it beyond just the paired mobile device will open up new ways to interact with the world, for example, controlling a car's infotainment system using the hearables.

    Hardware electronic chips can only cancel 70-80% of the noise. In order to achieve 100% noise cancellation, adaptive software features are required that can capture and filter out the noise. Innovation around software noise cancellation will also lower the cost of the hearables, as it will eliminate the need for a dedicated hardware chip inside the hearables.
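
    One well-known software technique in this direction is spectral subtraction, where the average spectrum of a noise-only recording is subtracted from the incoming audio frame by frame. The NumPy sketch below is a simplified illustration of that idea (window normalization and other production details are omitted), not the algorithm used by any particular hearable.

        import numpy as np

        def noise_spectrum(noise_clip, frame_size=512):
            """Average magnitude spectrum of a noise-only recording."""
            frames = [noise_clip[i:i + frame_size]
                      for i in range(0, len(noise_clip) - frame_size, frame_size)]
            window = np.hanning(frame_size)
            return np.mean([np.abs(np.fft.rfft(f * window)) for f in frames], axis=0)

        def spectral_subtract(audio, noise_mag, frame_size=512, over_subtract=1.5):
            """Subtract the estimated noise magnitude from each frame and resynthesize."""
            window = np.hanning(frame_size)
            hop = frame_size // 2
            out = np.zeros(len(audio))
            for start in range(0, len(audio) - frame_size, hop):
                frame = audio[start:start + frame_size] * window
                spectrum = np.fft.rfft(frame)
                mag, phase = np.abs(spectrum), np.angle(spectrum)
                clean_mag = np.maximum(mag - over_subtract * noise_mag, 0.0)
                clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_size)
                out[start:start + frame_size] += clean * window  # overlap-add
            return out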

    The hearables have opened new opportunities for both the hardware and software domains. The domain is expected to grow more in coming years.


    THE HEARABLES FUTURE

    Over the next decade, the adoption of smart devices will increase, mostly in developing markets. With Google becoming more of an AI services business, many local vendors from developing markets can take advantage of its solutions to provide locally manufactured smart hearables.

    It is inevitable that Android will also find its way onto the hearables, which will make them a standalone device rather than an add-on. However, to make the hearables work as a standalone system, it will be critical to provide better power management. Dynamic task management can only drive efficiency to a certain extent, which is why advanced battery technology is the need of the hour.

    Picture By Chetan Arvind Patil

    The development of the hearables as a technology platform will demand different form factors. Advanced semiconductor process nodes along with flexible hybrid electronics will ensure that the hearables become much smaller and more flexible than they are now.

    Expansion of 5G small cell connectivity will ensure that the hearables can form a network that securely allows sharing of information within a specific range. This will take person-to-person voice communication to a new level and make the hearables smarter than ever.

    One of the major domains that every smart device eventually tries to target is healthcare. The hearables are yet to enter this market due to limited use cases. It will be exciting to see how the hearables evolve over the next decade from a healthcare point of view.

    The hearables have gained a lot of traction and have already captured the market faster than other smart devices. The next few years will be exciting as innovation around the hearables grows.


  • V2X – Vehicle To Everything

    V2X – Vehicle To Everything

    Photo by chuttersnap on Unsplash

    Over the last half-century, vehicles have played an important role in everyone’s life. They have made point-to-point commuting a faster and more efficient process. Whether one is traveling on a public bus or in a private car, each of these vehicle types plays a very crucial role in enabling ease of living.

    The last decade saw a drastic change in how the automotive industry manufactures vehicles. Vehicles of all forms are now much more fuel-efficient. Alternative fuel technologies are being used on a large scale. Features are provided that make vehicles safer and more aware. The digital experience has been enhanced by making the most of auto software tools built on top of the Android Auto and Apple CarPlay ecosystems.

    The next big change in vehicles is going to be in data and communication technologies.

    On the data front, there are already many interesting hardware-based solutions that can capture the data generated by the vehicle and then present it to consumers and businesses to understand vehicle behavior and/or track vehicle usage. This is possible due to the standardization of On-Board Diagnostics (OBD) and the Controller Area Network (CAN). Data is also critical for the automotive industry to manufacture products with zero defects.
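
    As an example, reading such data only takes a few lines with the open-source python-OBD library; the sketch below assumes an ELM327-style adapter is plugged into the vehicle's OBD-II port and simply logs speed and engine RPM once per second.

        import time
        import obd  # pip install obd; talks to an ELM327-style OBD-II adapter

        connection = obd.OBD()  # auto-detects the adapter's serial/Bluetooth port

        while connection.is_connected():
            speed = connection.query(obd.commands.SPEED)  # vehicle speed PID
            rpm = connection.query(obd.commands.RPM)      # engine RPM PID
            if not speed.is_null() and not rpm.is_null():
                print(f"speed={speed.value}, rpm={rpm.value}")
            time.sleep(1)  # sample roughly once per second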

    There are still many unexplored opportunities in the data domain due to the lack of continuous high-speed internet access inside the vehicle. With 5G communication networks, even vehicles can be equipped with high-speed internet. This will also allow many over-the-top services and will enable data-driven innovation.

    The next decade is going to change how data is accessed within a vehicle due to the expansion of 5G. The different frequency bands 5G can run on make it a perfect solution for enabling real-time information access.

    While data and communication will play a major role in the automotive industry, it is also critical to understand the specifics of how this foundation will be laid.


    VEHICLE TO FUTURE

    The answer to how the automotive industry can make use of data and communication lies in the solutions provided under Vehicle-To-Everything (V2X).

    V2X holds big promises. All the sensors and data points, when combined, ensure that vehicles are more secure and aware than ever.

    V2X already enables on-the-go real-time entertainment, but in the future, with 5G and other network infrastructure, on-demand information and entertainment will go hand in hand.

    Picture By Chetan Arvind Patil

    The developments around V2X will provide second-by-second tracking of logistics and will pre-alert about possible issues with traffic, the vehicle, or any other important factor that may lead to shipment delays.

    V2X will also enable different levels of vehicle autonomy, apart from protecting pedestrians and bicyclists from colliding with vehicles. V2X will also increase the number of small cell networks by turning the vehicle itself into a cellular point of access.

    It is important to understand how V2X does so, through the different technology domains within V2X.


    VEHICLE TO EVERYTHING

    Vehicle-To-Everything (V2X) is all about how vehicles send data out and take data in with the help of advanced communication technologies. With 5G, there is going to be a rapid increase in V2X implementations. Every new vehicle in the market will then act as a data point, a router, and a network in itself.

    The X in V2X stands for the different technology domains a vehicle has started being part of.

    Vehicle-To-Network (V2N) – V2N is the foundation of V2X. With the help of widely deployed communication networks within and outside the vehicle, high-speed data transfer can occur. The important uses of V2N are voice and data communication for real-time navigation and entertainment, apart from critical emergency roadside assistance. Internet access using hotspots also comes under V2N.

    Every other V2X technology domain is fully dependent on V2N, as without a communication network (especially a wireless one), none of the other V2X implementations can work.

    Vehicle-To-Infrastructure (V2I) – V2I is critical for Level 5 automation, where, along with LIDAR and other self-driving sensors, data-based approximation is also useful. This way, vehicles can distinguish between the road and obstacles much better than with the single data point coming from the vehicle’s own hardware. This is done by feeding the vehicle a live view of its surroundings through direct communication with nearby internet and data nodes. This continuous data feed helps in decision making. Apart from self-driving, V2I can also communicate directly with the traffic control network. Using this information, the vehicle can adjust or advise speed to ensure that a traffic block miles away does not get more congested.

    Vehicle-To-Vehicle (V2V) – V2V takes the help of V2N along with V2I to capture information from vehicles in close proximity. With V2V, vehicles can communicate about traffic, speed, collisions, and distance. All these data points will enable much safer driving compared to the past decade. A self-driving vehicle is heavily dependent on its sensors to capture, decode, and react to the real-time situation. If the same self-driving vehicle gets accurate input that makes the driving safer, then it will also increase adoption and take safety to the next level. This is where V2V is important and helps drive toward zero accidents.
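
    A minimal sketch of what such a V2V status broadcast might carry is shown below; the field names are illustrative and simplified, not the exact layout of any standardized message set.

        from dataclasses import dataclass, asdict
        import json
        import time

        @dataclass
        class V2VStatusMessage:
            """Simplified set of fields a vehicle might broadcast to nearby vehicles."""
            vehicle_id: str
            timestamp: float     # seconds since epoch
            latitude: float
            longitude: float
            speed_mps: float     # current speed in meters per second
            heading_deg: float   # direction of travel, 0-360 degrees
            hard_braking: bool   # flag so following vehicles can react early

        def encode(msg: V2VStatusMessage) -> bytes:
            """Serialize the message for transmission over the V2V/V2N link."""
            return json.dumps(asdict(msg)).encode("utf-8")

        msg = V2VStatusMessage("veh-001", time.time(), 33.4255, -111.9400, 17.8, 92.0, False)
        print(encode(msg))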

    Picture By Chetan Arvind Patil

    Vehicle-To-Pedestrian (V2P) – V2P takes the approach of ensuring that the vehicle is always able to detect and take action in case a pedestrian or bicyclist comes in its way. V2P relies on V2N and V2I data points and combines them with its own data to ensure that any critical scenario, like a pedestrian or bicyclist coming in front of the vehicle, is handled without compromising the safety of the pedestrian or bicyclist, or even the passengers. Most of the systems in vehicles today are already equipped with the processing capability to alert the driver for different collision scenarios, but with V2P, the additional data points will add another layer of security and ensure 100% accurate collision alerts by considering a 360-degree view.

    Vehicle-To-Device (V2D) – V2D is already in use in most of the vehicles out in the market. With the help of Bluetooth communication, any device can connect with the vehicle and perform many tasks, from answering calls to playing music to logging data via OBD. The next step in V2D will be to allow smart devices to make use of the small cell 5G networks that every vehicle will come equipped with. This will allow vehicles to act as hotspots. V2D will also enable advanced secure keyless vehicle entry along with remote startup, locking, and tracking.

    Vehicle-To-Grid (V2G) – V2G is all about electric vehicles plugging into electric grids. These grids can be in a parking lot, at home, or at a roadside charging station. V2G allows real-time tracking of nearby grids that can be used to charge the vehicle. The same grids can also perform a diagnostic check to ensure that the safety of the vehicle is not compromised and that everything from the battery to the internal electrical network is intact. Performing such smart checks with V2G will provide more safety than the mileage-based servicing currently followed by manufacturers. The data points gathered with V2G every time an electric vehicle is charged can then be sent to central servers to process and understand energy usage along with other data insights.

    V2X provides unlimited opportunities not only to the automotive industry but also to hardware and software businesses. Consumers are also going to see a rapid increase in the feature list. It will be interesting to see how the automotive industry provides V2X features without making them too costly for consumers; otherwise, wide adoption of V2X will not be possible.


    Bloomberg Technology has an interesting video on 5G and the Future of Connected Vehicles, showcasing the importance of having a universal data and communication standard to enable the growth of V2X.