[Update] GPU2 plan details (multiple topics) (#6964)
* reflect new NVIDIA RTX 4000 Ada plans

* modified date

* minor edits

* pricing details in Choosing a Plan topic

* updated modified date for guides that call the updated shortguide

* added 'inferencing' to dictionary
crystallearobertson authored May 22, 2024
1 parent e8f83db commit ad1f663
Showing 8 changed files with 68 additions and 51 deletions.
1 change: 1 addition & 0 deletions ci/vale/dictionary.txt
@@ -1061,6 +1061,7 @@ Indri
inet
inet6
infector
inferencing
infographic
infosec
ingester
@@ -11,8 +11,9 @@ show_on_rss_feed: false
| [**Dedicated CPU**](/docs/products/compute/compute-instances/plans/dedicated-cpu/) | Dedicated | **4 GB - 512 GB\* Memory, 2 - 64 vCPUs, 80 GB - 7200 GB Storage**<br>[Starting at $36/mo ($0.05/hour)](https://www.linode.com/pricing/#compute-dedicated)<br><br>Equipped with Dedicated CPUs, which provide competition-free guaranteed CPU resources. Perfectly balanced for most production applications.<br><br>*Best for production websites, high traffic databases, and any application that requires 100% sustained CPU usage.* |
| [**Premium**](/docs/products/compute/compute-instances/plans/premium/) | Dedicated | **4 GB - 512 GB\* Memory, 2 - 64 vCPUs, 80 GB - 7200 GB Storage**<br>[Starting at $43/mo ($0.06/hour)](https://www.linode.com/pricing/#compute-premium)<br><br>Provides the best available AMD EPYC™ CPUs on dedicated resources. Consistent performance for CPU-intensive workloads.<br><br>*Best for enterprise-grade, business-critical, and latency-sensitive applications.* |
| [**High Memory**](/docs/products/compute/compute-instances/plans/high-memory/) | Dedicated | **24 GB - 300 GB Memory, 2 - 16 vCPUs, 20 GB - 340 GB Storage**<br>[Starting at $60/mo ($0.09/hour)](https://www.linode.com/pricing/#compute-high-memory)<br><br>Optimized for memory-intensive applications and equipped with Dedicated CPUs, which provide competition-free guaranteed CPU resources.<br><br>*Best for in-memory databases, in-memory caching systems, big data processing, and any production application that requires a large amount of memory while keeping costs down.* |
| [**GPU**](/docs/products/compute/compute-instances/plans/gpu/)<br>(limited availability) | Dedicated | **1 - 4 NVIDIA Quadro RTX cards, 24 - 96 GB Video Memory, 32 GB - 128 GB Memory, 8 - 24 vCPUs, 640 GB - 2560 GB Storage**<br>[Starting at $1000/mo ($1.50/hour)](https://www.linode.com/pricing/#compute-gpu)<br><br>The only instance type that's equipped with NVIDIA Quadro RTX 6000 GPUs (up to 4) for on demand execution of complex processing workloads. <br><br>*Best for applications that require massive amounts of parallel processing power, including machine learning, AI, graphics processing, and big data analysis.* |
| [**GPU - NVIDIA RTX 4000 Ada (Beta)**](/docs/products/compute/compute-instances/plans/gpu/) | Dedicated | **1 - 8 cards, 20 - 160 GB Video Memory, 64 GB - 512 GB Memory, 20 - 60 vCPUs, 1.5 TB - 25 TB Storage**<br>Starting at $600/mo ($0.83/hour)<br><br>The only Compute Instance type that's equipped with NVIDIA RTX 4000 Ada GPUs (up to 8) for on-demand execution of complex processing workloads.<br><br>*Best for applications that require massive amounts of parallel processing power, including machine learning, AI inferencing, graphics processing, and big data analysis.* |
| [**GPU - NVIDIA Quadro RTX 6000**](/docs/products/compute/compute-instances/plans/gpu/)<br>(limited deployment availability) | Dedicated | **1 - 4 cards, 24 - 96 GB Video Memory, 32 GB - 128 GB Memory, 8 - 24 vCPUs, 640 GB - 2560 GB Storage**<br>[Starting at $1000/mo ($1.50/hour)](https://www.linode.com/pricing/#compute-gpu)<br><br>The only Compute Instance type that's equipped with NVIDIA Quadro RTX 6000 GPUs (up to 4) for on-demand execution of complex processing workloads.<br><br>*Best for applications that require massive amounts of parallel processing power, including machine learning, AI, graphics processing, and big data analysis.* |

\*512 GB Dedicated CPU and Premium plans are in limited availability.

See [Choosing a Compute Instance Type and Plan](/docs/products/compute/compute-instances/plans/choosing-a-plan/) for a full comparison.
See [Choosing a Compute Instance Type and Plan](/docs/products/compute/compute-instances/plans/choosing-a-plan/) to compare plans.
2 changes: 1 addition & 1 deletion docs/products/compute/compute-instances/_index.md
@@ -2,7 +2,7 @@
title: Compute Instances
title_meta: "Compute Instance Product Documentation"
description: "Host your workloads on Linode's secure and reliable cloud infrastructure using Compute Instances, versatile Linux-based virtual machines."
modified: 2023-09-21
modified: 2024-05-21
tab_group_main:
is_root: true
title: Overview
@@ -3,7 +3,7 @@ title: "Create a Compute Instance"
title_meta: "Create a Compute Instance on the Linode Platform"
description: "Learn how to create a new Compute Instance, including choosing a distribution, region, and plan size."
published: 2022-04-19
modified: 2024-02-13
modified: 2024-05-21
keywords: ["getting started", "deploy", "linode", "linux"]
aliases: ['/guides/creating-a-compute-instance/','/products/compute/dedicated-cpu/guides/deploy/']
---
2 changes: 1 addition & 1 deletion docs/products/compute/compute-instances/plans/_index.md
@@ -2,7 +2,7 @@
title: Plan Types
title_meta: "Compute Instance Plan Types"
description: "Quickly compare each Compute Instance plan type, including Shared CPU and Dedicated CPU plans"
published: 2023-01-12
published: 2024-05-21
tab_group_main:
weight: 15
---
@@ -3,7 +3,7 @@ title: "Choosing a Compute Instance Type and Plan"
title_meta: "How to Choose a Compute Instance Plan"
description: "Get help deciding which Compute Instance type is right for your use case and learn how to select the most appropriate plan"
published: 2019-02-04
modified: 2024-03-11
modified: 2024-05-21
linkTitle: "Choosing a Plan"
keywords: ["choose", "help", "plan", "size", "shared", "high memory", "dedicated", "dedicated CPU", "GPU instance"]
tags: ["linode platform"]
@@ -109,22 +109,20 @@ Starting at $60/mo ($0.09/hour). See [High Memory Pricing](https://www.linode.co

### GPU Instances

All GPU plans are in limited availability.
**32 GB - 512 GB Memory, 8 - 60 Dedicated vCPUs, 640 GB - 12 TB Storage**<br>
NVIDIA RTX 4000 Ada GPU plans (Beta) start at $600/mo ($0.83/hour) with 1 GPU card, 20 vCPU cores, 64 GB of memory, and 1.5 TB of SSD storage. NVIDIA Quadro RTX 6000 plans start at $1000/mo ($1.50/hour) with 1 GPU card, 8 vCPU cores, 32 GB of memory, and 640 GB of storage. For a full list of plans, resources, and pricing, see [Akamai Cloud Computing Pricing](https://www.linode.com/pricing/#compute-gpu).

**32 GB - 128 GB Memory, 8 - 24 Dedicated vCPUs, 640 GB - 2560 GB Storage**<br>
Starting at $1000/mo ($1.50/hour). See [GPU Pricing](https://www.linode.com/pricing/#compute-gpu) for a full list of plans, resources, and pricing.

[GPU Instances](/docs/products/compute/compute-instances/plans/gpu/) are the only instance type equipped with [NVIDIA Quadro RTX 6000 GPU cards](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf) (up to 4) for on demand execution of complex processing workloads. These GPUs have CUDA cores, Tensor cores, and RT (Ray Tracing) cores. GPUs are designed to process large blocks of data in parallel, meaning that they are an excellent choice for any workload requiring thousands of simultaneous threads. With significantly more logical cores than a standard CPU, GPUs can perform computations that process large amounts of data in parallel more efficiently.
[GPU Instances](/docs/products/compute/compute-instances/plans/gpu/) are the only Compute Instance type equipped with [NVIDIA RTX 4000 Ada GPU cards](https://resources.nvidia.com/en-us-design-viz-stories-ep/rtx-4000-ada-datashe?lx=CCKW39&contentType=data-sheet) or [NVIDIA Quadro RTX 6000 GPU cards](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf) for on-demand execution of complex processing workloads. These GPUs have CUDA cores, Tensor cores, and RT (Ray Tracing) cores. GPUs are designed to process large blocks of data in parallel, meaning that they are an excellent choice for any workload requiring thousands of simultaneous threads. With significantly more logical cores than a standard CPU, GPUs can perform computations that process large amounts of data in parallel more efficiently.
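
As a rough sketch of the parallel execution model described above, the following CUDA example adds two arrays of about one million elements by assigning each element to its own GPU thread. It is illustrative only; the kernel name, array size, and launch geometry are arbitrary example values and are not tied to any particular GPU plan.

```cuda
// Minimal sketch: one GPU thread per array element.
// Values here (array size, block size) are arbitrary examples.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // 1,048,576 elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory: visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);  // launches ~1M threads
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);           // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Built with `nvcc` from NVIDIA's CUDA Toolkit, the same source should run on either GPU family listed above; the plans differ mainly in how many GPU cards (and therefore how many streaming multiprocessors) are available to schedule the blocks.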

**Recommended Use Cases:**

*Best for applications that require massive amounts of parallel processing power, including machine learning, AI, graphics processing, and big data analysis.*
*Best for applications that require massive amounts of parallel processing power, including machine learning, AI inferencing, graphics processing, and big data analysis.*

- [Machine Learning and AI](/docs/products/compute/compute-instances/plans/gpu/#machine-learning-and-ai)
- [Big Data](/docs/products/compute/compute-instances/plans/gpu/#big-data)
- [Video Encoding](/docs/products/compute/compute-instances/plans/gpu/#video-encoding)
- [General Purpose Computing Using NVIDIA's CUDA Toolkit](/docs/products/compute/compute-instances/plans/gpu/#general-purpose-computing-using-cuda)
- [Graphics Processing](/docs/products/compute/compute-instances/plans/gpu/#graphics-processing)
- [Video encoding](/docs/products/compute/compute-instances/plans/gpu/#video-encoding)
- [Graphics processing](/docs/products/compute/compute-instances/plans/gpu/#graphics-processing)
- [AI inferencing](/docs/products/compute/compute-instances/plans/gpu/#machine-learning-and-ai)
- [Big data analysis](/docs/products/compute/compute-instances/plans/gpu/#big-data)
- [General-purpose computing using NVIDIA's CUDA Toolkit](/docs/products/compute/compute-instances/plans/gpu/#general-purpose-computing-using-cuda)

## Compute Resources
