Your Tech Story


Nvidia short sellers lose $5 billion as shares rise more than 90%

Short sellers of Nvidia Corporation have lost $5.09 billion so far this year as the stock has risen more than 90 percent, according to financial data firm S3 Partners.

According to the firm’s Wednesday report, Nvidia is the biggest money-losing equity short of 2023, followed by Apple and Tesla.

Nvidia
Image Source: finance.yahoo.com

According to the report, Apple’s short sellers have lost $4.47 billion so far in 2023, while that stock has risen roughly 30 percent over the same period. Tesla’s short sellers have lost $3.65 billion this year as the stock has gained around 33 percent.


Nvidia’s short interest has fallen by 7.04 million shares, or 18 percent, so far this year. Shares sold short now represent 1.32 percent of the float, the lowest level since October 2022.

Nvidia shares were down 1.1 percent in midday trading on Wednesday, along with other chip makers, following a disappointing report from Advanced Micro Devices, Inc. (AMD) late on Tuesday.

Investors who sell securities “short” borrow shares in anticipation of a decline in the stock price, which would let them buy the shares back at a lower price, return them to the lender, and pocket the difference.
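As a rough illustration of that arithmetic, the sketch below computes a short seller’s profit or loss on a single position; the helper function, prices, and share count are hypothetical, not Nvidia’s actual figures.

```python
# Hypothetical short-sale profit/loss sketch (illustrative numbers only).
def short_pnl(shares: int, sale_price: float, buyback_price: float) -> float:
    """Profit (positive) or loss (negative) on a short position,
    ignoring borrowing fees, dividends, and commissions."""
    return shares * (sale_price - buyback_price)

# Shorting 100 shares at $150 and covering after a rally to $285
# yields a loss, which is how a rising stock punishes short sellers.
print(short_pnl(100, 150.0, 285.0))  # -13500.0
```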

NVIDIA Corp. designs and manufactures graphics processors, chipsets, and related multimedia software. Its operating segments are the Graphics Processing Unit (GPU), Tegra Processor, and All Other.

The GPU segment comprises product brands such as GRID for cloud-based visual computing, Tesla and DGX for AI data scientists and big-data researchers, Quadro for designers and creators, and GeForce for gamers.


The Tegra Processor segment integrates an entire computer onto a single chip, combining multi-core CPUs and GPUs to power supercomputing for game consoles, smartphones, and entertainment devices, as well as autonomous robots, drones, and vehicles.

The “All Other” segment includes stock-based compensation, corporate infrastructure and support costs, acquisition-related costs, legal settlement expenses, and other non-recurring charges.


Are Google’s AI supercomputers faster than Nvidia’s?

With powerful machine-learning models remaining the hottest topic in the tech industry, Google on Wednesday released details about one of its AI supercomputers, claiming it is faster and more efficient than rival Nvidia systems.

Tensor Processing Units, or TPUs, are artificial intelligence (AI) chips that Google has been developing and using since 2016. Nvidia currently holds roughly 90 percent of the market for training and deploying AI models.

supercomputers
Image Source: techzine.eu

As a leading innovator in AI, Google has produced several of the most significant advances in the field over the past decade. However, some believe the company has lagged in commercializing its inventions, and internally it has been racing to release products to show it has not squandered its lead, a “code red” situation.


Training the models behind products such as Google’s Bard or OpenAI’s ChatGPT, which run on Nvidia’s A100 chips, requires many supercomputers and large numbers of processors working in concert, with the machines running nonstop for weeks or months.

On Tuesday, Google said it had built a system of more than 4,000 TPUs joined with custom components designed to run and train AI models. The system has been in operation since 2020 and was used over 50 days to train Google’s PaLM model, its competitor to OpenAI’s GPT models.

The Google researchers claimed that the TPU-based supercomputer, known as TPU v4, is “1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100.” The researchers said, “The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models.”

Google’s researchers did not compare their TPU results with Nvidia’s latest AI chip, the H100, however, because the H100 came to market more recently and was built with more advanced manufacturing technology.

Results and rankings from MLPerf, an industry-wide AI chip benchmark, were published on Wednesday, and Nvidia CEO Jensen Huang said the results for the company’s most recent chip, the H100, were markedly faster than those of the previous generation.

Given the high cost of the computing power AI requires, many in the industry are focused on developing new processors, hardware components such as optical interconnects, or software techniques that reduce the amount of computing needed.


The computational demands of AI also benefit cloud providers such as Google, Microsoft, and Amazon, which rent out processing by the hour and offer startups credits or computing time to foster business relationships. Google, for example, said its TPU chips were used to train Midjourney, the AI image generator.


Nvidia and Microsoft Collaborate To Build AI Supercomputers

Nvidia and Microsoft are working together in a “multi-year collaboration” to build “one of the most powerful AI supercomputers in the world,” capable of handling the massive processing workloads needed to train and scale AI.

ai supercomputer
Image Source: ciobulletin.com

According to the reports, the AI supercomputer will be powered by Microsoft Azure’s advanced supercomputing infrastructure together with Nvidia GPUs, networking, and a full stack of AI software to help businesses train, deploy, and scale AI, including large state-of-the-art models.

NVIDIA’s A100 and H100 GPUs will be part of the array, coupled with its Quantum-2 400Gb/s InfiniBand networking. Notably, Azure would be the first public cloud to offer NVIDIA’s complete AI technology stack, allowing businesses to train and run AI at scale.

Manuvir Das, VP of enterprise computing at Nvidia, noted, “AI technology advances as well as industry adoption are accelerating. The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups and enabled new enterprise applications. Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalise on the transformative power of AI.”

NVIDIA will work with Azure to research and accelerate advances in generative AI, a rapidly developing area of artificial intelligence in which foundation models such as the Megatron-Turing NLG 530B serve as the basis for unsupervised, self-learning algorithms that generate new text, images, code, audio, or video.

The companies will also collaborate on improving DeepSpeed, Microsoft’s deep learning optimization library, and Azure enterprise customers will gain access to NVIDIA’s full suite of AI workflows and software development tools tuned for Azure. To accelerate transformer-based models used for large language models, generative AI, and code generation, among other applications, Microsoft DeepSpeed will leverage the NVIDIA H100 Transformer Engine.

This engine exposes 8-bit floating point (FP8) precision to DeepSpeed, offering twice the throughput of 16-bit operations and dramatically speeding up AI calculations for transformers.
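As a loose sketch of the reduced-precision idea, the snippet below uses PyTorch’s standard automatic mixed precision (FP16/BF16) as a stand-in; actual FP8 on the H100 goes through NVIDIA’s Transformer Engine and its DeepSpeed integration, which are not shown here.

```python
# Minimal reduced-precision training step using PyTorch automatic mixed precision.
# This only illustrates the general lower-precision speedup pattern; H100 FP8 is
# provided by NVIDIA's Transformer Engine and DeepSpeed, which are not used here.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 128, 512, device=device)  # (batch, sequence, features)

# Run the forward pass in reduced precision; master weights stay in FP32.
with torch.autocast(device_type=device,
                    dtype=torch.float16 if device == "cuda" else torch.bfloat16):
    loss = model(x).pow(2).mean()  # dummy loss just to drive a backward pass

scaler.scale(loss).backward()      # scale gradients to avoid FP16 underflow
scaler.step(optimizer)
scaler.update()
```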

The Verge reports that the rapid recent growth of these AI models has sharply increased demand for robust computing infrastructure.

The partnership is intriguing for several reasons, but notably because it makes one of the largest computing clusters available to businesses. Thanks to optimizations in NVIDIA’s newest “Hopper” generation of GPUs, companies will be able to train and deploy AI not only at a scale that was previously prohibitively expensive but also with far greater efficiency.

Although NVIDIA has a supercomputer of its own, the collaboration shows the company recognizes the enormous computational demands of contemporary algorithms. The deal brings together two of the largest organizations in the AI industry.

Microsoft has experience in this area, as seen in its relationship with OpenAI and its commitment to developing ethical and safe AI. NVIDIA, for its part, has been a pillar of AI research and development for the past decade thanks to its powerful GPUs and supporting technology stack, including CUDA and Tensor Cores.


190GB Of Data Allegedly Stolen From Samsung Leaked By Nvidia Hackers.

LAPSUS$, the hackers group responsible for the recent Nvidia data breach, claims to have hacked Samsung and stolen nearly 200GB of sensitive data.

The 190GB trove of exposed files includes source code for Samsung’s activation servers, bootloaders and biometric unlock algorithms for all recently released Samsung devices, and trusted applets for Samsung’s TrustZone environment. The leaked data is also thought to include Qualcomm’s confidential source code.

Members of the LAPSUS$ hackers group have claimed responsibility for the data breach, posting details of the data obtained in a Telegram channel and encouraging other members to “enjoy” the contents made available for Torrent download.

According to the message, the hackers also got “a variety of other data,” but the elements listed could put Samsung device users in immediate danger of being hacked or impersonated by cybercriminals.

Because the trusted applet (TA) source code obtained by LAPSUS$ runs inside Samsung’s Trusted Execution Environment (TEE), known as TrustZone, the hackers, and anyone who has downloaded the torrent files, may be able to bypass Samsung’s hardware cryptography, binary encryption, and access control.


The total size of the leaked data is around 190GB, which LAPSUS$ divided into three compressed files, and the torrent has already been downloaded and shared by over 400 peers.

According to a Samsung spokesperson, the company took steps to strengthen its security system “immediately after discovering the incident.”

“According to our preliminary findings, the breach involves some source code related to Galaxy device operation, but no personal information about our customers or employees. At this time, we do not expect any impact on our business or customers. We’ve put safeguards in place to prevent future incidents, and we’ll continue to serve our customers as usual,” the spokesperson said.

Source: www.itpro.co.uk

Qualcomm has yet to respond, and it’s unclear whether the hacking group had any demands for Samsung before leaking the private information.

Researchers discovered “severe” security flaws in a long line of Samsung flagship smartphones just weeks ago, which if exploited could allow attackers to steal cryptographic keys.

It also comes just five days after Nvidia confirmed that on February 26th, the LAPSUS$ hacking group successfully breached its systems and distributed 1TB of confidential company data, including security credentials for 71,000 former and current Nvidia employees.

The data was obtained through a double-extortion scheme, in which attackers compromise a victim and steal data before encrypting their machines, then threaten to leak the stolen data if the ransom is not paid. Double-extortion cases have risen over the past year, with one in every seven resulting in the leak of sensitive information.

It is worth noting that LAPSUS$’ attacks coincide with a spike in cyber warfare due to Russia’s invasion of Ukraine, yet the hacking group maintains that its actions are not politically motivated.

According to Matt Aldridge, principal solutions analyst at Carbonite and Webroot, “these gangs continue to be more inventive with the types of data and businesses they target,” similar to “most modern cyber attacks.”

“Given the high-profile nature of the victim, the hackers may have posted a message releasing Samsung’s data along with a snapshot of its source code in order to gain additional leverage in the event of a ransom demand. However, because the data breach has already occurred and the data has been exfiltrated, no ransom payment can ensure that all copies of the data are securely destroyed,” he said.


Nvidia finally acknowledged the GeForce RTX 3000 series global shortage publicly.

On 5th December 2020, during the Credit Suisse 24th Annual Technology Conference, Nvidia CFO Colette Kress acknowledged the global shortage of the RTX 3000 series. She went into some detail about the reasons for the shortage, as there have been several reports on what is causing the crisis. Kress told the media that wafer yields are not the only reason behind the global shortage.

During the conference webcast, Kress also said that the production yield of Samsung’s 8nm node is one of the main reasons for the slowed production of the 3000 series. Before this conference, Nvidia had not issued any official statement acknowledging the global shortage of the RTX 3000 series.

Reasons behind the Nvidia GeForce RTX 3000 shortage

Kress gave a fuller account of the RTX 3000 shortage during the webcast. Beyond confirming that the yield of Samsung’s 8nm node is one factor, she said the company is facing supply constraints that go beyond wafers and silicon, including substrates and other components, though no details were given. The company will keep working through the crisis and believes demand will exceed supply in Q4 for gaming overall.

Nvidia
Image Source: indianexpress.com

From what Kress said in the webcast, it is clear that the low production rate of various components is one reason GPU output has fallen. Until now, Samsung has been the sole supplier of the 8nm nodes for the Nvidia GeForce RTX 3000 series, which has also slowed Nvidia’s production. The company is therefore looking at other options and will most likely source nodes from TSMC as well; it is better to have more than one option in hand in a time of crisis.

Samsung or TSMC nodes 

Many rumors and reports circulated earlier about Nvidia’s preference for Samsung’s or TSMC’s nodes. Only a few months ago did Nvidia confirm it was using Samsung’s 8N node to build its second-generation Ampere RTX cards, replacing the TSMC 7nm node used for the initial Ampere GA100 GPU. Nvidia CEO Jensen Huang then announced that the majority of its GPUs would use TSMC’s technology, with Samsung chosen for a small subset of graphics silicon production.

At the GeForce Special Event on 1st September 2020, Jensen Huang announced that the company had decided to use Samsung’s 8N process for all of its large Ampere RTX GPUs. Many reports said the new generation of GeForce chips, built on Samsung’s smaller process node, would deliver a huge amount of rasterization performance. But with the production shortage, Nvidia will probably switch back to TSMC.

Why is the shortage taking place?

According to some experts, the shortage stems from logistics and supply-chain difficulties. A report by Tom’s Hardware explains that most of the cargo space in distribution networks is being taken up by pharmaceutical supplies, squeezing shipments for the electronics sector. In the middle of a pandemic, most attention is going to pharmaceutical needs and their fastest possible transport, including COVID-19 vaccines being shipped around the world, which is currently every nation’s prime concern. That is likely a major reason for the delays in yields and production.

Demand for these GPUs has already outstripped supply, and as you may have noticed, stock vanishes almost instantly. Even those lucky enough to get hold of an RTX 3000 card often receive an unboxed unit.

Will it get better soon?

Kress said the situation should improve within a couple of months, but it is still difficult to quantify exactly how much of the demand the company can meet. Nvidia will give a further update by the end of this quarter. Until then, we just have to sit tight and wait for these GPUs to come back in stock. The pandemic has affected every industrial sector, and we have to get through it.


Nvidia to Acquire ARM Holdings from Softbank at a $40 Billion Valuation

The coronavirus pandemic has made this a precarious time for companies of all kinds, and we have seen quite a few mergers and acquisitions in recent months. The newest addition to that list is Nvidia’s deal for Arm Holdings. In a surprise announcement on Sunday, Nvidia agreed to buy the smartphone chip designer from Softbank at a valuation of $40 billion. Here’s a look at everything you need to know about the acquisition and the lead-up to it.

Details Regarding the Deal

The companies announced on Sunday that they were close to finalizing the acquisition. The deal, which values Arm at $40 billion, includes $21.5 billion in Nvidia stock and $12 billion in cash, with $2 billion payable at signing. Softbank acquired Arm in 2016 at a $31.4 billion valuation, one of the most expensive acquisitions ever. Arm Holdings is a well-known chip-design company focused on chipset designs for smartphones; its designs also underpin Qualcomm and Apple chips, making it a popular choice within the industry. Furthermore, Apple has announced plans to shift its Mac computers from Intel to Arm-based chips, opening new avenues for the company.

Nvidia’s Plan for Arm

Nvidia is a giant in AI and in graphics card design and manufacturing, with operations that also support the design of self-driving vehicles and other autonomous applications. The company said it would retain Arm’s largely open licensing model and continue to uphold Arm’s customer neutrality. Nvidia has been doing well thanks to a pandemic-driven boom in video game usage and sales, and it is preparing to launch a new desktop graphics card aimed at PC gamers playing more demanding titles. In its most recent earnings report, the company projected 46% revenue growth for the third quarter of this year.

Nvidia Arm Logo
Image Source: nvidia.com

Past Acquisition

SoftBank’s acquisition of Arm was a result of its push into the Internet of Things space. The company viewed Arm Holdings as a valuable investment in this field because of its work on wireless connectivity and smart devices. Arm also works on intelligent chipsets that could help power everyday smart devices such as refrigerators, other appliances, and cars. At the time of the acquisition, Softbank chairman Masayoshi Son hailed the move, going as far as to say that Arm was a company he had always admired and that the acquisition meant a lot to him.

Tough Time for Softbank

However, over the years, Softbank has run into financial trouble. Heavy investments in companies like Uber and WeWork have led to losses, and the company’s shares recently lost value because of its holdings in tech companies that fared poorly in the market in September. While it is unclear precisely how much Softbank will make from the sale to Nvidia, it does take a capital-intensive business off its hands. Experts, however, question whether Softbank can make much money on the sale given how much it has invested in Arm.

Softbank also needs cash to support the start-ups it has backed through its Vision Fund, providing some relief to companies facing hardship from the lockdowns brought on by the coronavirus pandemic. Earlier this year, it also said it would sell about $21 billion of the stock it holds in T-Mobile. Under the acquisition, Arm will operate as an Nvidia division, remain headquartered in the UK, and keep its existing licensing model and customer base. The deal may still face intense regulatory scrutiny from the EU, however.

Microsoft is also on board, building Arm-based Surface devices and running Windows on Arm. The two companies are not competitors per se, since Nvidia does little CPU design or mobile hardware manufacturing. Nvidia may be planning an entry into the next stage of AI computing with this acquisition; reports say it wants to build a brand-new AI research center in Cambridge. If that proves true, the two companies can align with each other, enabling both to push ahead in AI software development.