Nvidia

From Wikipedia, the free encyclopedia

Coordinates: 37°22′14.62″N 121°57′49.46″W

Nvidia Corporation
Type: Public
Industry:
  • Semiconductors
  • Artificial intelligence
  • Video games
  • Consumer electronics
  • Computer hardware
Founded: April 5, 1993
Founders: Jensen Huang, Chris Malachowsky, Curtis Priem
Headquarters: Santa Clara, California, U.S.
Area served: Worldwide
Revenue: US$10.92 billion (2020)[1]
Operating income: US$2.85 billion (2020)[1]
Net income: US$2.8 billion (2020)[1]
Total assets: US$17.32 billion (2020)[1]
Total equity: US$12.2 billion (2020)[1]
Number of employees: 18,100 (October 2020)[1]
Subsidiaries: Nvidia Advanced Rendering Center, Mellanox Technologies; after proposed acquisition: Arm Ltd.
Website: www.nvidia.com, www.developer.nvidia.com

Nvidia Corporation[note 1] (/ɛnˈvɪdiə/ en-VID-ee-ə) is an American multinational technology company incorporated in Delaware and based in Santa Clara, California.[2] It designs graphics processing units (GPUs) for the gaming and professional markets, as well as system on a chip units (SoCs) for the mobile computing and automotive markets. Its primary GPU line, labeled "GeForce", is in direct competition with the GPUs of the "Radeon" brand by Advanced Micro Devices (AMD). Nvidia expanded its presence in the gaming industry with its handheld game consoles Shield Portable, Shield Tablet, and Shield Android TV and its cloud gaming service GeForce Now. Its professional line of GPUs is used in workstations for applications in such fields as architecture, engineering and construction, media and entertainment, automotive, scientific research, and manufacturing design.[3]

In addition to GPU manufacturing, Nvidia provides an application programming interface (API) called CUDA that allows the creation of massively parallel programs which utilize GPUs.[4][5] Nvidia GPUs are deployed in supercomputing sites around the world.[6][7] More recently, the company has moved into the mobile computing market, where it produces Tegra mobile processors for smartphones and tablets as well as vehicle navigation and entertainment systems.[8][9][10] In addition to AMD, its competitors include Intel and Qualcomm.[11][12]
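
The CUDA programming model described above can be illustrated with a short sketch. The following pure-Python code is a hypothetical illustration (not actual CUDA or Nvidia code): it mimics how a CUDA "kernel" runs the same element-wise function once per index, with every invocation independent of the others, which is what lets a GPU execute thousands of them in parallel.

```python
# Conceptual sketch of the data-parallel model CUDA exposes, in pure
# Python (no GPU required). The "kernel" computes one element; on a GPU,
# each invocation would be a separate hardware thread.

def saxpy_kernel(i, a, x, y, out):
    # In real CUDA, `i` would be derived from the thread/block index.
    out[i] = a * x[i] + y[i]

def launch(n, a, x, y):
    out = [0.0] * n
    # A GPU would run these n "threads" concurrently; here we loop.
    for i in range(n):
        saxpy_kernel(i, a, x, y, out)
    return out
```

Because each index touches only its own output slot, the loop order does not matter, which is the property that makes the computation massively parallel.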

On September 13, 2020, Nvidia announced plans to acquire Arm Ltd. from SoftBank, pending regulatory approval, for US$40 billion in stock and cash, which would be the largest semiconductor acquisition to date. SoftBank Group would acquire slightly less than a 10% stake in Nvidia, and Arm would maintain its headquarters in Cambridge.[13][14][15][16][17]

History

Aerial view of the new Nvidia headquarters building and surrounding campus and area in Santa Clara, California, in 2017. Apple Park is visible in the distance.

Nvidia was founded on April 5, 1993,[18][19][20] by Jensen Huang (CEO as of 2020), a Taiwanese American, previously director of CoreWare at LSI Logic and a microprocessor designer at Advanced Micro Devices (AMD); Chris Malachowsky, an electrical engineer who worked at Sun Microsystems; and Curtis Priem, previously a senior staff engineer and graphics chip designer at Sun Microsystems.[21][22]

In 1993, the three co-founders believed that the proper direction for the next wave of computing was accelerated or graphics-based computing, because it could solve problems that general-purpose computing could not. They also observed that video games were simultaneously one of the most computationally challenging problems and would have incredibly high sales volume. Video games became the company's flywheel for reaching large markets and funding the huge R&D needed to solve massive computational problems. With only $40,000 in the bank, the company was born.[23] The company subsequently received $20 million of venture capital funding from Sequoia Capital and others.[24] Nvidia initially had no name and the co-founders named all their files NV, as in "next version". The need to incorporate the company prompted the co-founders to review all words with those two letters, leading them to "invidia", the Latin word for "envy".[23] Nvidia went public on January 22, 1999.[25][26][27]

Releases and acquisitions

The release of the RIVA TNT in 1998 solidified Nvidia's reputation for developing capable graphics adapters. In late 1999, Nvidia released the GeForce 256 (NV10), most notably introducing on-board transformation and lighting (T&L) to consumer-level 3D hardware. Running at 120 MHz and featuring four pixel pipelines, it implemented advanced video acceleration, motion compensation, and hardware sub-picture alpha blending. The GeForce outperformed existing products by a wide margin.

Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, which earned Nvidia a $200 million advance. However, the project took many of its best engineers away from other projects. In the short term this did not matter, and the GeForce2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the intellectual assets of its one-time rival 3dfx, a pioneer in consumer 3D graphics technology that led the field from the mid-1990s until 2000.[28][29] The acquisition was finalized in April 2002.[30]

In July 2002, Nvidia acquired Exluna, a maker of software-rendering tools, for an undisclosed sum; the personnel were merged into the Cg project.[31] In August 2003, Nvidia acquired MediaQ for approximately US$70 million.[32] On April 22, 2004, Nvidia acquired iReady, a provider of high-performance TCP/IP and iSCSI offload solutions.[33] In December 2004, it was announced that Nvidia would assist Sony with the design of the graphics processor (RSX) in the PlayStation 3 game console. On December 14, 2005, Nvidia acquired ULI Electronics, which at the time supplied third-party southbridge parts for chipsets to ATI, Nvidia's competitor.[34] In March 2006, Nvidia acquired Hybrid Graphics.[35] In December 2006, Nvidia, along with its main rival in the graphics industry, AMD (which had acquired ATI), received subpoenas from the U.S. Department of Justice regarding possible antitrust violations in the graphics card industry.[36]

Forbes named Nvidia its Company of the Year for 2007, citing its accomplishments during that period as well as during the previous five years.[37] On January 5, 2007, Nvidia announced that it had completed the acquisition of PortalPlayer, Inc.[38] In February 2008, Nvidia acquired Ageia, developer of the PhysX physics engine and physics processing unit, and announced that it planned to integrate the PhysX technology into its future GPU products.[39][40]

In July 2008, Nvidia took a write-down of approximately $200 million against its first-quarter revenue after reporting that certain mobile chipsets and GPUs it had produced had "abnormal failure rates" due to manufacturing defects; Nvidia did not reveal the affected products. In September 2008, Nvidia became the subject of a class-action lawsuit over the defects, which alleged that the faulty GPUs had been incorporated into certain laptop models manufactured by Apple Inc., Dell, and HP. In September 2010, Nvidia reached a settlement under which it would reimburse owners of the affected laptops for repairs or, in some cases, replacement.[41][42] On January 10, 2011, Nvidia signed a six-year, $1.5 billion cross-licensing agreement with Intel, ending all litigation between the two companies.[43]

In May 2011, it was announced that Nvidia had agreed to acquire Icera, a baseband chip making company in the UK, for $367 million.[46] In November 2011, after initially unveiling it at Mobile World Congress, Nvidia released its Tegra 3 ARM system-on-a-chip for mobile devices, claiming that it featured the first-ever quad-core mobile CPU.[44][45] In January 2013, Nvidia unveiled the Tegra 4, as well as the Nvidia Shield, an Android-based handheld game console powered by the new system-on-chip.[47] On July 29, 2013, Nvidia announced that it had acquired PGI from STMicroelectronics.[48]

Since 2014, Nvidia has diversified its business, focusing on three markets: gaming, automotive electronics, and mobile devices.[49]

On May 6, 2016, Nvidia unveiled the first GPUs of the GeForce 10 series, the GTX 1080 and 1070, based on the company's new Pascal microarchitecture. Nvidia claimed that both models outperformed its Maxwell-based Titan X model; the models incorporate GDDR5X and GDDR5 memory respectively, and use a 16 nm manufacturing process. The architecture also supports a new hardware feature known as simultaneous multi-projection (SMP), which is designed to improve the quality of multi-monitor and virtual reality rendering.[50][51][52] Laptops that include these GPUs and are sufficiently thin – as of late 2017, under 0.8 inches (20 mm) – have been designated as meeting Nvidia's "Max-Q" design standard.[53]

In July 2016, Nvidia agreed to a settlement of a false advertising lawsuit regarding its GTX 970 model, as the cards were unable to use all of their advertised 4 GB of RAM due to limitations of their hardware design.[54] In May 2017, Nvidia announced a partnership with Toyota, which would use Nvidia's Drive PX-series artificial intelligence platform for its autonomous vehicles.[55] In July 2017, Nvidia and Chinese search giant Baidu announced a far-reaching AI partnership that includes cloud computing, autonomous driving, consumer devices, and Baidu's open-source AI framework PaddlePaddle. Baidu revealed that Nvidia's Drive PX 2 AI would be the foundation of its autonomous-vehicle platform.[56]

Nvidia officially released the Titan V on December 7, 2017.[57][58]

Nvidia officially released the Nvidia Quadro GV100 on March 27, 2018,[59] and the RTX 2080 GPUs on September 27, 2018. In 2018, Google announced that Nvidia's Tesla P4 graphics cards would be integrated into Google Cloud's artificial intelligence service.[60]

In May 2018, on the Nvidia user forum, a thread was started[61] asking the company to update users on when it would release web drivers for its cards installed on legacy Mac Pro machines, up to the mid-2012 5,1 model, running the macOS Mojave operating system 10.14. Web drivers are required to enable graphics acceleration and multiple display monitor capabilities of the GPU. On its Mojave update info website, Apple stated that macOS Mojave would run on legacy machines with 'Metal compatible' graphics cards[62] and listed Metal compatible GPUs, including some manufactured by Nvidia.[63] However, this list did not include Metal compatible cards that worked in macOS High Sierra using Nvidia-developed web drivers. In September, Nvidia responded, "Apple fully control drivers for Mac OS. But if Apple allows, our engineers are ready and eager to help Apple deliver great drivers for Mac OS 10.14 (Mojave)."[64] In October, Nvidia followed this up with another public announcement, "Apple fully controls drivers for Mac OS. Unfortunately, Nvidia currently cannot release a driver unless it is approved by Apple,"[65] suggesting a possible rift between the two companies.[66] By January 2019, with still no sign of the enabling web drivers, Apple Insider weighed in on the controversy with a claim that Apple management "doesn't want Nvidia support in macOS".[67] The following month, Apple Insider followed up with another claim that Nvidia support was abandoned because of "relational issues in the past",[68] and that Apple was developing its own GPU technology.[69] Without Apple-approved Nvidia web drivers, Apple users faced replacing their Nvidia cards with a competing supported brand, such as AMD Radeon, from the list recommended by Apple.[70]

On March 11, 2019, Nvidia announced a deal to buy Mellanox Technologies for $6.9 billion[71] to substantially expand its footprint in the high-performance computing market. In May 2019, Nvidia announced new RTX Studio laptops, claiming they would be seven times faster than a top-end MacBook Pro with a Core i9 and AMD's Radeon Pro Vega 20 graphics in apps like Maya and RedCine-X Pro.[72] In August 2019, Nvidia announced Minecraft RTX, an official Nvidia-developed patch for the game Minecraft adding real-time DXR ray tracing exclusively to the Windows 10 version of the game. The whole game is, in Nvidia's words, "refit" with path tracing, which dramatically affects the way light, reflections, and shadows work inside the engine.[73]

In May 2020, Nvidia's top scientists developed an open-source ventilator in order to address the shortage resulting from the global coronavirus pandemic.[74] On May 14, 2020, Nvidia officially announced their Ampere GPU microarchitecture and the Nvidia A100 GPU accelerator.[75][76] In July 2020, it was reported that Nvidia was in talks with SoftBank to buy Arm, a UK-based chip designer, for $32 billion.[77]

On September 1, 2020, Nvidia officially announced the GeForce 30 series based on the company's new Ampere microarchitecture.[78][79]

On September 13, 2020, it was announced that Nvidia would buy Arm Holdings from SoftBank Group for $40 billion, subject to the usual scrutiny, with the latter retaining a 10% share of Nvidia.[16][15][80][81]

In October 2020, Nvidia announced its plan to build the UK's most powerful computer in Cambridge, England. Named Cambridge-1, the computer will employ AI to support healthcare research, with expected completion by the end of 2020, at a cost of approximately £40 million. According to Jensen Huang, "The Cambridge-1 supercomputer will serve as a hub of innovation for the UK, and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery."[82]

Also in October 2020, along with the release of the Nvidia RTX A6000, Nvidia announced that it is retiring its workstation GPU brand Quadro, shifting the product name to Nvidia RTX for future products, which will be based on the Nvidia Ampere architecture.[83]

In August 2021, the proposed takeover of Arm Holdings was stalled after the UK's Competition and Markets Authority raised "significant competition concerns".[84]

Maxwell advertising dispute

GTX 970 hardware specifications

Issues with the GeForce GTX 970's specifications were first brought up by users when they found that the cards, while featuring 4 GB of memory, rarely accessed memory above the 3.5 GB boundary. Further testing and investigation eventually led to Nvidia issuing a statement that the card's initially announced specifications had been altered without notice before the card was made commercially available, and that the card took a performance hit once memory above the 3.5 GB limit was put into use.[85][86][87]

The card's back-end hardware specifications, initially announced as being identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GeForce GTX 980) and the number of ROPs (56 versus 64 in the 980). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section plus a 0.5 GB one, access to the latter being seven times slower than to the former.[88] The company then promised a specific driver modification to alleviate the performance issues produced by the cutbacks suffered by the card.[89] However, Nvidia later clarified that the promise had been a miscommunication and there would be no specific driver update for the GTX 970.[90] Nvidia claimed that it would assist customers who wanted refunds in obtaining them.[91] On February 26, 2015, Nvidia CEO Jen-Hsun Huang went on record in Nvidia's official blog to apologize for the incident.[92] In February 2015, a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California.[93][94]

Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers.[95] This comes at the cost of dividing the memory bus into high-speed and low-speed segments that cannot be accessed at the same time, unless one segment is reading while the other is writing, because the L2/ROP unit managing both GDDR5 controllers shares the read return channel and the write data bus between the two controllers and itself.[95] This design is used in the GeForce GTX 970, which can therefore be described as having 3.5 GB in its high-speed segment on a 224-bit bus and 0.5 GB in a low-speed segment on a 32-bit bus.[95]
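
The practical effect of the split bus can be seen with some back-of-envelope arithmetic. The sketch below assumes the GTX 970's nominal 7 Gbit/s-per-pin GDDR5 data rate; the function and its name are illustrative, not taken from any Nvidia tool:

```python
# Peak memory bandwidth of a GDDR5 bus segment, in GB/s:
# (bus width in bits / 8 bits-per-byte) * per-pin data rate in Gbit/s.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin=7.0):
    return bus_width_bits / 8 * data_rate_gbps_per_pin

fast = bandwidth_gb_s(224)   # 3.5 GB segment on the 224-bit bus: 196.0 GB/s
slow = bandwidth_gb_s(32)    # 0.5 GB segment on the 32-bit bus: 28.0 GB/s
ratio = fast / slow          # 7.0 — consistent with the reported 7x slowdown
```

The 7:1 bandwidth ratio falls directly out of the 224-bit versus 32-bit bus widths, matching the access-speed difference described above.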

On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class action lawsuit,[93] offering a $30 refund on GTX 970 purchases. The agreed upon refund represents the portion of the cost of the storage and performance capabilities the consumers assumed they were obtaining when they purchased the card.[96]

Async compute support

While the Maxwell series was marketed as fully DirectX 12 compliant,[97][98] Oxide Games, developer of Ashes of the Singularity, uncovered that Maxwell-based cards do not perform well when async compute is utilized.[99][100][101][97]

It appears that while this core feature is in fact exposed by the driver,[102] Nvidia partially implemented it through a driver-based shim, coming at a high performance cost.[101] Unlike AMD's competing GCN-based graphics cards which include a full implementation of hardware-based asynchronous compute,[103][104] Nvidia planned to rely on the driver to implement a software queue and a software distributor to forward asynchronous tasks to the hardware schedulers, capable of distributing the workload to the correct units.[105] Asynchronous compute on Maxwell therefore requires that both a game and the GPU driver be specifically coded for asynchronous compute on Maxwell in order to enable this capability.[106] The 3DMark Time Spy benchmark shows no noticeable performance difference between asynchronous compute being enabled or disabled.[106] Asynchronous compute is disabled by the driver for Maxwell.[106]

Oxide claims that this led to Nvidia pressuring them not to include the asynchronous compute feature in their benchmark at all, so that the 900 series would not be at a disadvantage against AMD's products which implement asynchronous compute in hardware.[100]

Maxwell requires that the GPU be statically partitioned for asynchronous compute to allow tasks to run concurrently.[107] Each partition is assigned to a hardware queue. If any of the queues assigned to a partition empties out or is unable to submit work for any reason (e.g., a task in the queue must be delayed until a hazard is resolved), the partition and all of the resources in that partition reserved for that queue will idle.[107] Asynchronous compute could therefore easily hurt performance on Maxwell if software is not coded to work with Maxwell's static scheduler.[107] Furthermore, graphics tasks saturate Nvidia GPUs much more easily than they do AMD's GCN-based GPUs, which are much more heavily weighted towards compute, so Nvidia GPUs have fewer scheduling holes that could be filled by asynchronous compute than AMD's.[107] For these reasons, the driver forces a Maxwell GPU to place all tasks into one queue and execute each task in serial, giving each task the undivided resources of the GPU whether or not it can saturate them.[107]
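
A toy model can illustrate why static partitioning wastes capacity when queue workloads are unbalanced: units reserved for a queue that runs out of work sit idle instead of helping the busy queue. The function and all the numbers below are invented for illustration and do not model the real hardware scheduler:

```python
# Fraction of total GPU capacity actually used when execution units are
# statically split between two queues. Units whose queue has no work for
# a time step contribute nothing during that step.

def static_partition_utilization(units_a, busy_steps_a,
                                 units_b, busy_steps_b, total_steps):
    used = units_a * busy_steps_a + units_b * busy_steps_b
    capacity = (units_a + units_b) * total_steps
    return used / capacity

# Example: 12 units partitioned to graphics (busy for all 10 steps) and
# 4 units to compute (work for only 2 of the 10 steps). The compute
# partition idles for 8 steps, so 20% of the GPU's capacity is wasted.
u = static_partition_utilization(12, 10, 4, 2, 10)   # 0.8
```

This is why, as described above, getting a benefit from asynchronous compute on Maxwell requires the partition sizes to be tuned to the actual workload mix.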

Finances

For fiscal year 2020, Nvidia reported earnings of US$2.796 billion on annual revenue of US$10.918 billion, a decline of 6.8% from the previous fiscal year. In January 2021, Nvidia's shares traded at over $531 per share and its market capitalization was valued at over US$328.7 billion.[108]

For Q2 of 2020, Nvidia reported sales of $3.87 billion, a 50% rise from the same period in 2019. The surge in sales was attributed to increased demand for computer technology during the pandemic. According to the company's financial chief, Colette Kress, the effects of the pandemic will "likely reflect this evolution in enterprise workforce trends with a greater focus on technologies, such as Nvidia laptops and virtual workstations, that enable remote work and virtual collaboration."[109]

Year[108]  Revenue (mil. US$)  Net income (mil. US$)  Total assets (mil. US$)  Price per share (US$)  Employees
2005 2,010 89 1,629 8.81 2,101
2006 2,376 301 1,955 16.76 2,737
2007 3,069 449 2,675 25.68 4,083
2008 4,098 798 3,748 14.77 4,985
2009 3,425 −30 3,351 10.97 3,772
2010 3,326 −68 3,586 12.56 5,706
2011 3,543 253 4,495 15.63 6,029
2012 3,998 581 5,553 12.52 5,042
2013 4,280 563 6,412 13.38 7,974
2014 4,130 440 7,251 17.83 6,384
2015 4,682 631 7,201 23.71 6,384
2016 5,010 614 7,370 53.76 9,227
2017 6,910 1,666 9,841 149.79 10,299
2018 9,714 3,047 11,241 232.38 11,528
2019 11,716 4,141 13,292 174.59 13,277
2020 10,918 2,796 17,315 395.63 13,775

GPU Technology Conference

Nvidia's GPU Technology Conference (GTC) is a series of technical conferences held around the world.[110] It originated in 2009 in San Jose, California, with an initial focus on the potential for solving computing challenges through GPUs.[111] In recent years, the conference focus has shifted to various applications of artificial intelligence and deep learning, including self-driving cars, healthcare, high performance computing, and Nvidia Deep Learning Institute (DLI) training.[112] GTC 2018 attracted over 8,400 attendees.[110] GTC 2020 was converted to a digital event and drew roughly 59,000 registrants.[113]

The 2021 GTC keynote, streamed on YouTube on April 12, included a portion made with CGI using the Nvidia Omniverse real-time rendering platform. Due to the photorealism of the rendering, including a model of CEO Jensen Huang, news outlets reported not being able to discern that a portion of the keynote was CGI until it was revealed in a blog post on August 11.[114][115]

Product families

A Shield Tablet with its accompanying input pen (left) and game controller

Nvidia's product families cover graphics, wireless communication, PC processors, and automotive hardware and software.

Some families are listed below:

  • GeForce, consumer-oriented graphics processing products
  • Nvidia RTX, professional visual computing graphics processing products (replacing Quadro)
  • NVS, multi-display business graphics solution
  • Tegra, a system on a chip series for mobile devices
  • Tesla, dedicated general-purpose GPU for high-end image generation applications in professional and scientific fields
  • nForce, a motherboard chipset created by Nvidia for Intel (Celeron, Pentium and Core 2) and AMD (Athlon and Duron) microprocessors
  • Nvidia GRID, a set of hardware and services by Nvidia for graphics virtualization
  • Nvidia Shield, a range of gaming hardware including the Shield Portable, Shield Tablet and, most recently, the Shield Android TV
  • Nvidia Drive automotive solutions, a range of hardware and software products for assisting car drivers. The Drive PX-series is a high performance computer platform aimed at autonomous driving through deep learning,[116] while Driveworks is an operating system for driverless cars.[117]
  • BlueField, a range of Data Processing Units, initially inherited from their acquisition of Mellanox Technologies[118][119]
  • Nvidia Datacenter/Server class CPU, codenamed Nvidia Grace, coming in 2023[120][121]

Open-source software support

Until September 23, 2013, Nvidia had not published any documentation for its advanced hardware,[122] meaning that programmers could not write free and open-source device drivers for its products without resorting to clean-room reverse engineering.

Instead, Nvidia provides its own binary GeForce graphics drivers for X.Org and an open-source library that interfaces with the Linux, FreeBSD, or Solaris kernels and the proprietary graphics software. Nvidia previously provided, but has stopped supporting, an obfuscated open-source driver that supports only two-dimensional hardware acceleration and ships with the X.Org distribution.[123]

The proprietary nature of Nvidia's drivers has generated dissatisfaction within free-software communities.[124] Some Linux and BSD users insist on using only open-source drivers and regard Nvidia's insistence on providing nothing more than a binary-only driver as inadequate, given that competing manufacturers such as Intel offer support and documentation for open-source developers and that others (like AMD) release partial documentation and provide some active development.[125][126]

Because of the closed nature of the drivers, Nvidia video cards cannot deliver adequate features on some platforms and architectures given that the company only provides x86/x64 and ARMv7-A driver builds.[127] As a result, support for 3D graphics acceleration in Linux on PowerPC does not exist, nor does support for Linux on the hypervisor-restricted PlayStation 3 console.

Some users claim that Nvidia's Linux drivers impose artificial restrictions, like limiting the number of monitors that can be used at the same time, but the company has not commented on these accusations.[128]

In 2014, with its Maxwell GPUs, Nvidia started to require firmware supplied by Nvidia to unlock all features of its graphics cards. This remains the case and makes writing open-source drivers difficult.[129][130][131]

Deep learning

Nvidia GPUs are used in deep learning and accelerated analytics due to Nvidia's CUDA API, which allows programmers to utilize the large number of cores present in GPUs to parallelize BLAS operations that are extensively used in machine learning algorithms.[132] They were included in many Tesla, Inc. vehicles before Elon Musk announced at Tesla Autonomy Day in 2019 that the company had developed its own SoC and Full Self-Driving computer and would stop using Nvidia hardware in its vehicles.[133][134] These GPUs are used by researchers, laboratories, tech companies, and enterprises.[135] In 2009, Nvidia was involved in what was called the "big bang" of deep learning, "as deep-learning neural networks were combined with Nvidia graphics processing units (GPUs)".[136] That year, Google Brain used Nvidia GPUs to create deep neural networks capable of machine learning, and Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times.[137]
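
The reason BLAS-style operations map so well to GPUs is that each output element of, say, a matrix multiply is an independent dot product. The following pure-Python sketch (illustrative only, not CUDA or any BLAS library) makes that independence explicit:

```python
# Naive matrix multiply: every output element C[i][j] is an independent
# dot product of row i of A with column j of B, so all m*n of them can
# be computed in parallel. A GPU assigns each (i, j) pair to a thread.

def matmul(A, B):
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):          # on a GPU, each (i, j) iteration would
        for j in range(n):      # run as its own thread, not in a loop
            C[i][j] = sum(A[i][p] * B[p][j] for p in range(k))
    return C
```

Libraries such as cuBLAS implement the same mathematical operation with highly tuned GPU kernels, which is what machine-learning frameworks build on.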

DGX

DGX is a line of supercomputers by Nvidia.

In April 2016, Nvidia produced the DGX-1, based on an 8-GPU cluster, to improve the ability of users to apply deep learning by combining GPUs with integrated deep learning software.[138] It also developed Nvidia Tesla K80 and P100 GPU-based virtual machines, which Google made available through Google Cloud in November 2016.[139] Microsoft added GPU servers in a preview offering of its N series, based on Nvidia's Tesla K80s, each containing 4,992 processing cores. Later that year, AWS launched its P2 instances, using up to 16 Nvidia Tesla K80 GPUs. That month, Nvidia also partnered with IBM to create a software kit that boosts the AI capabilities of Watson,[140] called IBM PowerAI.[141][142] Nvidia also offers its own NVIDIA Deep Learning software development kit.[143] In 2017, the GPUs were also brought online at the Riken Center for Advanced Intelligence Project for Fujitsu.[144] The company's deep learning technology led to a boost in its 2017 earnings.[145]

In May 2018, researchers at Nvidia's artificial intelligence department demonstrated the possibility that a robot could learn to perform a job simply by observing a person doing the same job. They created a system that, after brief review and testing, can be used to control next-generation universal robots. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers and scientists, allowing them to efficiently run high-performance applications.[146]

Inception Program

Nvidia's Inception Program was created to support startups making exceptional advances in the fields of artificial intelligence and data science. Award winners are announced at Nvidia's GTC Conference. There are currently 2,800 startups in the Inception Program.[147][148]

2018 winners

  • Subtle Medical (healthcare)
  • AiFi (enterprise)
  • Kinema Systems (autonomous vehicles)

2017 winners

  • Genetesis (social innovation)
  • Athelas (hottest emerging)
  • Deep Instinct (most disruptive)

GeForce Partner Program

The Nvidia GeForce Partner Program was a marketing program designed to provide partnering companies with benefits such as public relations support, video game bundling, and marketing development funds.[149] The program proved to be controversial, with complaints about it possibly being an anti-competitive practice.[150]

First announced in a blog post on March 1, 2018,[151] it was canceled in May 2018.[152]

Hardware Unboxed controversy

On December 10, 2020, Nvidia told popular YouTube tech reviewer Steven Walton of Hardware Unboxed that it would no longer supply him with GeForce Founders Edition graphics card review units.[153][154] In a Twitter message, Hardware Unboxed said, "Nvidia have officially decided to ban us from receiving GeForce Founders Edition GPU review samples. Their reasoning is that we are focusing on rasterization instead of ray tracing. They have said they will revisit this 'should your editorial direction change.'"[155]

In emails that were disclosed by Walton from Nvidia Senior PR Manager Bryan Del Rizzo, Nvidia had said:

...your GPU reviews and recommendations have continued to focus singularly on rasterization performance, and you have largely discounted all of the other technologies we offer gamers. It is very clear from your community commentary that you do not see things the same way that we, gamers, and the rest of the industry do.[156]

TechSpot, partner site of Hardware Unboxed, said, "this and other related incidents raise serious questions around journalistic independence and what they are expecting of reviewers when they are sent products for an unbiased opinion."[156]

A number of prominent technology reviewers came out strongly against Nvidia's move.[157][158] Linus Sebastian, of Linus Tech Tips, titled the episode of his popular weekly WAN Show "NVIDIA might ACTUALLY be EVIL..."[159] and was highly critical of the company's move to dictate specific outcomes of technology reviews.[160] The popular review site Gamers Nexus called it "Nvidia's latest decision to shoot both its feet: They've now made it so that any reviewers covering RT will become subject to scrutiny from untrusting viewers who will suspect subversion by the company. Shortsighted self-own from NVIDIA."[161]

Two days later, Nvidia reversed their stance.[162][163] Hardware Unboxed sent out a Twitter message, "I just received an email from Nvidia apologizing for the previous email & they've now walked everything back."[164][157] On December 14, Hardware Unboxed released a video explaining the controversy from their viewpoint.[165] Via Twitter, they also shared a second apology sent by Nvidia's Del Rizzo that said "to withhold samples because I didn't agree with your commentary is simply inexcusable and crossed the line."[166][167]

Notes

  1. ^ Officially written as NVIDIA and stylized in its logo as nVIDIA with the lowercase "n" the same height as the uppercase "VIDIA"; formerly stylized as nVIDIA with a large italicized lowercase "n" on products from the mid 1990s to early-mid 2000s. "NVIDIA Logo Guidelines at a Glance" (PDF). nvidia.com. Nvidia. Retrieved March 21, 2018.

References

  1. ^ Jump up to: a b c d e f "NVIDIA Annual Reports 2020" (PDF). nvidianews.nvidia.com. Nvidia. December 2020.
  2. ^ "NVIDIA Corporation – Investor Resources – FAQs". investor.nvidia.com.
  3. ^ Smith, Ryan. "Quadro No More? NVIDIA Announces Ampere-based RTX A6000 & A40 Video Cards For Pro Visualization". www.anandtech.com. Retrieved March 10, 2021.
  4. ^ "NVIDIA Doesn't Want Cryptocurrency Miners to Buy Its Gaming GPUs". MSN. Retrieved April 5, 2021.
  5. ^ Kirk, David; Hwu, Wen-Mei (2017). Programming Massively Parallel Processors (Third ed.). Elsevier. p. 345. ISBN 978-0-12-811986-0.
  6. ^ Clark, Don (August 4, 2011). "J.P. Morgan Shows Benefits from Chip Change". WSJ Digits Blog. Retrieved September 14, 2011.
  7. ^ "Top500 Supercomputing Sites". Top500. Retrieved September 14, 2011.
  8. ^ Burns, Chris (August 3, 2011). "2011 The Year of Nvidia dominating Android Superphones and tablets". SlashGear. Retrieved September 14, 2011.
  9. ^ "Tegra Super Tablets". Nvidia. Retrieved September 14, 2011.
  10. ^ "Tegra Super Phones". Nvidia. Retrieved September 14, 2011.
  11. ^ Jennewine, Trevor (January 15, 2021). "Why Intel's Competitive Edge Is Crumbling". The Motley Fool. Retrieved April 16, 2021.
  12. ^ Neiger, Chris (January 26, 2021). "Better Buy: NVIDIA vs. Qualcomm". The Motley Fool. Retrieved April 16, 2021.
  13. ^ "NVIDIA to Acquire Arm for $40 Billion, Creating World's Premier Computing Company for the Age of AI". NVIDIA (Press release). Retrieved December 2, 2020.
  14. ^ "NVIDIA to Acquire Arm for $40 Billion, Creating World's Premier Computing Company for the Age of AI". NVIDIA. September 13, 2020. Retrieved November 21, 2020.
  15. ^ Jump up to: a b Rosoff, Matt (September 13, 2020). "Nvidia to buy Arm Holdings from SoftBank for $40 billion". CNBC. Retrieved September 13, 2020.
  16. ^ Jump up to: a b Moorhead, Patrick. "It's Official- NVIDIA Acquires Arm For $40B To Create What Could Be A Computing Juggernaut". Forbes. Retrieved September 14, 2020.
  17. ^ Lyons, Kim (September 13, 2020). "Nvidia is acquiring Arm for $40 billion". The Verge. Retrieved September 15, 2020.
  18. ^ "Company Info". Nvidia.com. Retrieved November 9, 2010.
  19. ^ "Jensen Huang: Executive Profile & Biography". Bloomberg News. Retrieved June 21, 2018.
  20. ^ "Articles of Incorporation of Nvidia Corporation". California Secretary of State. Retrieved October 23, 2019 – via California Secretary of State Business Database.
  21. ^ "NVIDIA Company History: Innovations Over the Years". NVIDIA. Retrieved April 17, 2021.
  22. ^ "NVIDIA Corporation | History, Headquarters, & Facts". Encyclopedia Britannica. Retrieved April 17, 2021.
  23. ^ Jump up to: a b Nusca, Andrew (November 16, 2017). "This Man Is Leading an AI Revolution in Silicon Valley—And He's Just Getting Started". Fortune. Archived from the original on November 16, 2017. Retrieved November 28, 2017.
  24. ^ Williams, Elisa (April 15, 2002). "Crying wolf". Forbes. Retrieved February 11, 2017. Huang, a chip designer at AMD and LSI Logic, cofounded the company in 1993 with $20 million from Sequoia Capital and others.
  25. ^ Feinstein, Ken (January 22, 1999). "Nvidia Goes Public". gamecenter.co. Archived from the original on October 12, 1999. Retrieved July 13, 2019.
  26. ^ Takahashi, Dean (January 25, 1999). "Shares of Nvidia Surge 64% After Initial Public Offering". The Wall Street Journal. Retrieved July 13, 2019.
  27. ^ "NVIDIA Corporation Announces Initial Public Offering of 3,500,000 Shares of Common Stock". nvidia.com. January 22, 1999. Retrieved July 13, 2019.
  28. ^ Perez, Derek; Hara, Michael (December 15, 2000). "NVIDIA to Acquire 3dfx Core Graphics Assets" (Press release). Santa Clara, CA. Retrieved January 23, 2017.
  29. ^ Leupp, Alex; Sellers, Scott (December 15, 2000). "3dfx Announces Three Major Initiatives To Protect Creditors and Maximize Shareholder Value" (Press release). San Jose, CA. Archived from the original on February 5, 2001. Retrieved January 23, 2017. Board of Directors Initiates Cost-Cutting Measures, Recommends to Shareholders Sale of Company Assets to NVIDIA Corporation for $112 million and Dissolution of Company
  30. ^ Kanellos, Michael (April 11, 2002). "Nvidia buys out 3dfx graphics chip business". CNET. Retrieved January 23, 2017.
  31. ^ Becker, David. "Nvidia buys out Exluna". News.cnet.com. Retrieved November 9, 2010.
  32. ^ "NVIDIA Completes Purchase of MediaQ". Press Release. NVIDIA Corporation. August 21, 2003. Archived from the original on January 9, 2014. Retrieved August 21, 2016.
  33. ^ "NVIDIA Announces Acquisition of iReady". Press Release. NVIDIA Corporation. April 22, 2004. Retrieved August 21, 2016.
  34. ^ "NVIDIA to Acquire ULi Electronics, a Leading Developer of Core Logic Technology". Press Release. NVIDIA Corporation. December 14, 2005. Retrieved August 21, 2016.
  35. ^ Smith, Tony (March 22, 2006). "Nvidia acquires Hybrid Graphics – Middleware purchase". Hardware. The Register. Archived from the original on January 16, 2013. Retrieved August 21, 2016.
  36. ^ Krazit, Tom; McCarthy, Caroline (December 1, 2006). "Justice Dept. subpoenas AMD, Nvidia". New York Times. Archived from the original on December 8, 2006.
  37. ^ Brian Caulfield (January 7, 2008). "Shoot to Kill". Forbes. Retrieved December 26, 2007.
  38. ^ "Nvidia acquires PortalPlayer". Press Release. NVIDIA Corporation. January 5, 2007. Retrieved August 21, 2016.
  39. ^ "Nvidia to acquire Ageia for the PhysX chip". CNET. Retrieved May 26, 2017.
  40. ^ "Did NVIDIA cripple its CPU gaming physics library to spite Intel?". Ars Technica. July 9, 2010. Retrieved May 26, 2017.
  41. ^ "Nvidia GPU Class-Action Settlement Offers Repairs, New Laptops". PC Magazine. Retrieved May 26, 2017.
  42. ^ "Update: Nvidia Says Older Mobile GPUs, Chipsets Failing". ExtremeTech. Retrieved May 26, 2017.
  43. ^ "Intel agrees to pay NVIDIA $1.5b in patent license fees, signs cross-license". Engadget. Retrieved May 26, 2017.
  44. ^ "Nvidia Tegra 3: what you need to know". Techradar. November 9, 2011. Retrieved May 26, 2017.
  45. ^ "Nvidia Quad Core Mobile Processors Coming in August". PC World. Retrieved February 15, 2011.
  46. ^ "Cambridge coup as Icera goes to Nvidia for £225m". Business Weekly. May 9, 2011. Retrieved May 10, 2011.
  47. ^ "Nvidia announces Project Shield handheld gaming system with 5-inch multitouch display, available in Q2 of this year". The Verge. January 7, 2013. Retrieved May 26, 2017.
  48. ^ "NVIDIA Pushes Further into HPC With Portland Group Acquisition – insideHPC". insideHPC. July 29, 2013. Retrieved August 25, 2017.
  49. ^ Team, Trefis (December 31, 2014). "Nvidia's Performance In 2014: Factors That Are Driving Growth". Forbes. Retrieved October 7, 2020.
  50. ^ "Nvidia's new graphics cards are a big deal". The Verge. May 7, 2016. Retrieved May 26, 2017.
  51. ^ Mark Walton (May 7, 2016). "Nvidia's GTX 1080 and GTX 1070 revealed: Faster than Titan X at half the price". Ars Technica.
  52. ^ Joel Hruska (May 10, 2016). "Nvidia's Ansel, VR Funhouse apps will enhance screenshots, showcase company's VR technology". ExtremeTech.
  53. ^ Crider, Michael (October 5, 2017). "What Are NVIDIA MAX-Q Laptops?". How-To Geek. Retrieved December 18, 2017.
  54. ^ Smith, Ryan. "Update: NVIDIA GeForce GTX 970 Settlement Claims Website Now Open". Anandtech. Purch, Inc. Retrieved November 15, 2016.
  55. ^ Alexandria Sage (May 10, 2017). "Nvidia says Toyota will use its AI technology for self-driving cars". Reuters.
  56. ^ "Nvidia and Baidu join forces in far reaching AI partnership".
  57. ^ "NVIDIA TITAN V Transforms the PC into AI Supercomputer". NVIDIA Newsroom.
  58. ^ "Introducing NVIDIA TITAN V: The World's Most Powerful PC Graphics Card". NVIDIA.
  59. ^ "News Archive". NVIDIA Newsroom.
  60. ^ "Google Cloud gets support for Nvidia's Tesla P4 inferencing accelerators". Tech Crunch. Retrieved August 30, 2018.
  61. ^ "When will the Nvidia Web Drivers be released for macOS Mojave 10.14". Nvidia. May 10, 2018.
  62. ^ "Upgrade to macOS Mojave". Apple.
  63. ^ "Install macOS 10.14 Mojave on Mac Pro (mid 2010) and Mac Pro (mid 2012)". Apple.
  64. ^ "CUDA 10 and macOS 10.14". Nvidia. September 28, 2018.
  65. ^ "FAQ about MacOS 10.14 (Mojave) NVIDIA drivers". October 18, 2018.
  66. ^ Maislinger, Florian (January 22, 2019). "Apple and Nvidia are said to have a silent hostility". PC Builders Club.
  67. ^ Gallagher, William; Wuerthele, Mike (January 18, 2019). "Apple's management doesn't want Nvidia support in macOS, and that's a bad sign for the Mac Pro". AppleInsider.
  68. ^ Yuryev, Vadim (February 14, 2019). "Video: Nvidia support was abandoned in macOS Mojave, and here's why". AppleInsider.
  69. ^ Dilger, Daniel Eran (April 4, 2017). "Why Apple's new GPU efforts are a major disruptive threat to Nvidia". AppleInsider.
  70. ^ "Install macOS 10.14 Mojave on Mac Pro (mid 2010) and Mac Pro (mid 2012)". Apple.
  71. ^ "Nvidia to acquire Mellanox Technologies for about $7 billion in cash". www.cnbc.com. March 11, 2019. Retrieved March 11, 2019.
  72. ^ Byford, Sam (May 27, 2019). "Nvidia announces RTX Studio laptops aimed at creators". The Verge. Retrieved May 27, 2019.
  73. ^ "Minecraft with RTX: The World's Best Selling Videogame Is Adding Ray Tracing". www.nvidia.com. Retrieved April 25, 2020.
  74. ^ "NVIDIA's top scientist develops open-source ventilator that can be built with $400 in readily-available parts". TechCrunch. Retrieved May 1, 2020.
  75. ^ "NVIDIA's New Ampere Data Center GPU in Full Production". NVIDIA Newsroom.
  76. ^ "NVIDIA A100". NVIDIA.
  77. ^ Massoudi, Arash; Bradshaw, Tim; Fontanella-Khan, James (July 31, 2020). "Nvidia in talks to buy Arm from SoftBank for more than $32bn". Financial Times. Retrieved July 3, 2020.
  78. ^ "NVIDIA Delivers Greatest-Ever Generational Leap with GeForce RTX 30 Series GPUs" (Press release). NVIDIA. September 1, 2020. Retrieved August 5, 2021.
  79. ^ "GeForce RTX 30 Series Graphics Cards: The Ultimate Play". NVIDIA. September 1, 2020. Retrieved August 5, 2021.
  80. ^ Arash Massoudi; Robert Smith; James Fontanella-Khan (September 12, 2020). "SoftBank set to sell UK's Arm Holdings to Nvidia for $40bn". Financial Times. Retrieved September 12, 2020.
  81. ^ "NVIDIA to Acquire Arm for $40 Billion, Creating World's Premier Computing Company for the Age of AI". September 13, 2020. Retrieved September 13, 2020.
  82. ^ Sam Shead (October 5, 2020). "Nvidia pledges to build Britain's largest supercomputer following $40 billion bid for Arm". CNBC. Retrieved October 5, 2020.
  83. ^ Smith, Ryan. "Quadro No More? NVIDIA Announces Ampere-based RTX A6000 & A40 Video Cards For Pro Visualization". www.anandtech.com. Retrieved March 10, 2021.
  84. ^ Browne, Ryan (August 20, 2021). "Nvidia's $40 billion Arm takeover warrants an in-depth competition probe, UK regulator says". CNBC. Retrieved August 20, 2021.
  85. ^ "NVIDIA Discloses Full Memory Structure and Limitations of GTX 970". PCPer.
  86. ^ "GeForce GTX 970 Memory Issue Fully Explained – Nvidia's Response". WCFTech. January 24, 2015.
  87. ^ "Why Nvidia's GTX 970 slows down when using more than 3.5GB VRAM". PCGamer. January 26, 2015.
  88. ^ "GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". AnandTech.
  89. ^ "NVIDIA Working on New Driver For GeForce GTX 970 To Tune Memory Allocation Problems and Improve Performance". WCFTech. January 28, 2015.
  90. ^ "NVIDIA clarifies no driver update for GTX 970 specifically". PC World. January 29, 2015.
  91. ^ "NVIDIA Plans Driver Update for GTX 970 Memory Issue, Help with Returns". pcper.com.
  92. ^ "Nvidia CEO addresses GTX 970 controversy". PCGamer. February 26, 2015.
  93. ^ Jump up to: a b Chalk, Andy (February 22, 2015). "Nvidia faces false advertising lawsuit over GTX 970 specs". PC Gamer. Retrieved March 27, 2015.
  94. ^ Niccolai, James (February 20, 2015). "Nvidia hit with false advertising suit over GTX 970 performance". PC World. Retrieved March 27, 2015.
  95. ^ Jump up to: a b c Ryan Smith. "Diving Deeper: The Maxwell 2 Memory Crossbar & ROP Partitions - GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". anandtech.com.
  96. ^ "Nvidia settles class action lawsuit". Top Class Actions. July 27, 2016. Retrieved July 27, 2016.
  97. ^ Jump up to: a b http://international.download.nvidia.com/geforce-com/international/images/nvidia-geforce-gtx-980-ti/nvidia-geforce-gtx-980-ti-directx-12-advanced-api-support.png
  98. ^ "GeForce GTX 980 - Specifications - GeForce". geforce.com.
  99. ^ "DX12 GPU and CPU Performance Tested: Ashes of the Singularity Benchmark". pcper.com.
  100. ^ Jump up to: a b Hilbert Hagedoorn. "Nvidia Wanted Oxide dev DX12 benchmark to disable certain DX12 Features ? (content updated)". Guru3D.com.
  101. ^ Jump up to: a b "The Birth of a new API". Oxide Games. August 16, 2015.
  102. ^ "[Various] Ashes of the Singularity DX12 Benchmarks". Overclock.net. August 17, 2015.
  103. ^ "Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12". TechPowerUp.
  104. ^ Hilbert Hagedoorn. "AMD Radeon R9 Fury X review". Guru3D.com.
  105. ^ "[Various] Ashes of the Singularity DX12 Benchmarks". Overclock.net. August 17, 2015.
  106. ^ Jump up to: a b c Shrout, Ryan (July 14, 2016). "3DMark Time Spy: Looking at DX12 Asynchronous Compute Performance". PC Perspective. Archived from the original on July 15, 2016. Retrieved July 14, 2016.
  107. ^ Jump up to: a b c d e Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 9. Retrieved July 21, 2016.
  108. ^ Jump up to: a b "NVIDIA Corporation – Financial Info – Annual Reports and Proxies". investor.nvidia.com. Retrieved November 18, 2018.
  109. ^ Armental, Maria (August 19, 2020). "Nvidia Posts Record Sales as Pandemic Sustains Demand for Gaming, Data-Center Chips". The Wall Street Journal. Retrieved August 20, 2020.
  110. ^ Jump up to: a b "GPU Technology Conference". GPU Technology Conference. Retrieved June 13, 2018.
  111. ^ "Company History | NVIDIA". www.nvidia.com. Retrieved June 13, 2018.
  112. ^ "Deep Learning and GPU-Programming Workshops at GTC 2018". NVIDIA. Retrieved June 13, 2018.
  113. ^ "Deep Learning and GPU-Programming Workshops at GTC 2018". NVIDIA. March 2, 2020. Retrieved March 2, 2020.
  114. ^ Loeffler, John (August 12, 2021). "Jensen's Kitchen was a lie: Nvidia reveals GTC 2021 keynote nearly 100% fake". TechRadar. Retrieved August 13, 2021.
  115. ^ Caulfield, Brian (August 11, 2021). "NVIDIA Omniverse Changes the Way Industries Collaborate | NVIDIA Blog". The Official NVIDIA Blog. Retrieved August 13, 2021.
  116. ^ "Nvidia automotive solutions". Nvidia. Retrieved March 29, 2016.
  117. ^ "Nvidia unveils driverless car OS and partnership with TomTom". September 29, 2016. Retrieved October 20, 2016.
  118. ^ "NVIDIA BlueField Data Processing Units". NVIDIA. Retrieved May 29, 2021.
  119. ^ Deierling, Kevin (May 21, 2020). "What Is a DPU? | NVIDIA Blog". The Official NVIDIA Blog. Retrieved May 29, 2021.
  120. ^ "NVIDIA Announces CPU for Giant AI and High Performance Computing Workloads" (Press release). NVIDIA. April 12, 2021. Retrieved August 5, 2021.
  121. ^ "NVIDIA Grace CPU". NVIDIA. Retrieved August 5, 2021.
  122. ^ "Nvidia Offers to Release Public Documentation on Certain Aspects of Their GPUs". September 23, 2013. Retrieved September 24, 2013.
  123. ^ "nv". Retrieved August 6, 2015.
  124. ^ "Linus Torvalds: 'Damn You, Nvidia' for not Supporting Linux". The Verge. June 17, 2012. Retrieved July 9, 2013.
  125. ^ "X.org, distributors, and proprietary modules". Linux Weekly News. Eklektix. August 14, 2006. Retrieved November 3, 2008.
  126. ^ Benstead, Luke (January 10, 2011). "An overview of graphic card manufacturers and how well they work with Ubuntu". Ubuntu Gamer.
  127. ^ "Unix Drivers". Retrieved August 6, 2015.
  128. ^ Kevin Parrish (October 3, 2013). "Nvidia Removed Linux Driver Feature Due to Windows". Tom's Hardware. Retrieved August 6, 2015.
  129. ^ "NVIDIA Begins Requiring Signed GPU Firmware Images". Slashdot. September 27, 2014.
  130. ^ "Linux-Firmware Adds Signed NVIDIA Firmware Binaries For Turing's Type-C Controller". Phoronix. February 13, 2019.
  131. ^ "The Open-Source NVIDIA 'Nouveau' Driver Gets A Batch Of Fixes For Linux 5.3". Phoronix. July 19, 2019.
  132. ^ Kirk, David; Hwu, Wen-Mei (2017). Programming Massively Parallel Processors (Third ed.). Elsevier. p. 345. ISBN 978-0-12-811986-0.
  133. ^ "Nvidia's Self-Driving Vehicle Approach — from Tesla to DHL to Mercedes". CleanTechnica. August 13, 2020. Retrieved October 28, 2020.
  134. ^ "Google Cloud adds NVIDIA Tesla K80 GPU support to boost deep learning performance – TechRepublic".
  135. ^ "Intel, Nvidia Trade Shots Over AI, Deep Learning". August 25, 2016.
  136. ^ "Nvidia CEO bets big on deep learning and VR". April 5, 2016.
  137. ^ "From not working to neural networking". The Economist. June 23, 2016.
  138. ^ Coldewey, Devin. "NVIDIA announces a supercomputer aimed at deep learning and AI".
  139. ^ Nichols, Shaun (February 21, 2017). "Google rents out Nvidia Tesla GPUs in its cloud. If you ask nicely, that'll be 70 cents an hour, bud". The Register.
  140. ^ "IBM, NVIDIA partner for 'fastest deep learning enterprise solution' in the world – TechRepublic".
  141. ^ "IBM and Nvidia team up to create deep learning hardware". November 14, 2016.
  142. ^ "IBM and Nvidia make deep learning easy for AI service creators with a new bundle". November 15, 2016.
  143. ^ "Facebook 'Big Basin' AI Compute Platform Adopts NVIDIA Tesla P100 For Next Gen Data Centers". Archived from the original on November 26, 2020. Retrieved April 19, 2017.
  144. ^ "Nvidia to Power Fujitsu's New Deep Learning System at RIKEN – insideHPC". March 5, 2017.
  145. ^ Tilley, Aaron (February 9, 2017). "Nvidia Beats Earnings Estimates As Its Artificial Intelligence Business Keeps On Booming". Forbes. Retrieved January 27, 2021.
  146. ^ "Robot see, robot do: Nvidia system lets robots learn by watching humans" New Atlas, May 23, 2018
  147. ^ Dean Takahashi (March 28, 2018). "Nvidia's Inception AI contest awards $1 million to 3 top startups". Venture Beat. Retrieved September 6, 2018.
  148. ^ "Six Startups Split $1.5 Million in Cash in AI startup competition". The Official NVIDIA Blog. May 10, 2017. Retrieved March 28, 2018.
  149. ^ "Nvidia gets anti-competitive with unsavory GeForce Partner Program".
  150. ^ Bennett, Kyle (March 8, 2018). "GeForce Partner Program Impacts Consumer Choice". HardOcP. Archived from the original on July 12, 2019.
  151. ^ "GeForce Partner Program Helps Gamers Know What They're Buying | NVIDIA Blog". The Official NVIDIA Blog. March 1, 2018. Retrieved April 10, 2018.
  152. ^ Killian, Zak (May 4, 2018). "Nvidia puts the kibosh on the GeForce Partner Program". Tech Report. Retrieved May 4, 2018.
  153. ^ Miller, Chris (December 12, 2020). "Nvidia Under Fire For Banning Review Site That Doesn't Focus On Nvidia Hardware Strengths | Happy Gamer". HappyGamer. Retrieved December 13, 2020.
  154. ^ "Hardware Unboxed VS NVIDIA: a Masterpiece of Bad Marketing". Pangoly. December 13, 2020. Retrieved December 17, 2020.
  155. ^ "Nvidia have officially decided to ban us from receiving GeForce Founders Edition GPU review samples". Twitter. Retrieved December 13, 2020.
  156. ^ Jump up to: a b Lal, Arjun Krishna; Franco, Julio (December 12, 2020). "The ugly side of Nvidia: A rollercoaster ride that shows when Big Tech doesn't get it". TechSpot. Retrieved December 13, 2020.
  157. ^ Jump up to: a b Alderson, Alex. "NVIDIA u-turns on its decision to block Hardware Unboxed from receiving GPU review units". Notebookcheck. Retrieved December 13, 2020.
  158. ^ "Nvidia puts pressure on the hardware press". HardwareHeaven.com. December 14, 2020. Retrieved December 15, 2020.
  159. ^ Kokhanyuk, Stanislav. "I believe in ray tracing, but I do not believe in Nvidia's RTX 3000-series GPUs". Notebookcheck. Retrieved December 13, 2020.
  160. ^ "NVIDIA might ACTUALLY be EVIL... - WAN Show December 11, 2020". YouTube. December 11, 2020.
  161. ^ "I have something else to say about NVIDIA's latest decision to shoot both its feet: They've now made it so that any reviewers covering RT will become subject to scrutiny from untrusting viewers who will suspect subversion by the company. Shortsighted self-own from NVIDIA". Twitter. December 11, 2020. Retrieved December 13, 2020.
  162. ^ Farrell, Nick (December 14, 2020). "Nvidia retreats from PR disaster". fudzilla.com. Retrieved December 14, 2020.
  163. ^ "Nvidia issues an Apology for Backlisting Hardware Unboxed as their Reviewer". Appuals.com. December 15, 2020. Retrieved December 16, 2020.
  164. ^ "I just received an email from Nvidia apologizing for the previous email & they've now walked everything back". Twitter. December 12, 2020. Retrieved December 13, 2020.
  165. ^ Hardware Unboxed. "Nvidia Bans Hardware Unboxed, Then Backpedals: Our Thoughts". YouTube. Retrieved December 14, 2020.
  166. ^ Hardware Unboxed (December 14, 2020). "Twitter: Bryan Del Rizzo from Nvidia has reached out a second time to apologize and asked us to share this with you".
  167. ^ Stanley, Donny (December 15, 2020). "NVIDIA Apologizes For Email Blacklisting Reviewer, Retracts Original Statements". AdoredTV. Retrieved December 22, 2020.
