This repository has been archived by the owner on Jun 5, 2023. It is now read-only.

Merge pull request #69 from leekillough/cleanup_text
Clean up text files
Rmalavally authored Apr 3, 2020
2 parents ad24e83 + ae37ef6 commit 7c17272
Showing 417 changed files with 7,359 additions and 7,578 deletions.
48 changes: 24 additions & 24 deletions Current_Release_Notes/Current-Release-Notes.rst
April 1st, 2020
What Is ROCm?
==============

ROCm is designed to be a universal platform for GPU-accelerated computing. This modular design allows hardware vendors to build drivers that support the ROCm framework. ROCm is also designed to integrate multiple programming languages and to make it easy to add support for other languages.

Note: You can also clone the source code for individual ROCm components from the GitHub repositories.

ROCm Components
===============
The following components for the ROCm platform are released and available for the v3.3
release:

* Drivers

* Tools

* Libraries

* Source Code

You can access the latest supported version of drivers, tools, libraries, and source code for the ROCm platform at the following location:
https://github.com/RadeonOpenCompute/ROCm
The ROCm v3.3.x platform is designed to support the following operating systems:

* RHEL v7.7 (Using devtoolset-7 runtime support)

* SLES 15 SP1


What's New in This Release
===========================

Users can install and access multiple versions of the ROCm toolkit simultaneously.

Previously, users could install only a single version of the ROCm toolkit.

Now, users have the option to install multiple versions simultaneously and toggle to the desired version of the ROCm toolkit. From the v3.3 release, multiple versions of ROCm packages can be installed in the */opt/rocm-<version>* folder.

**Prerequisites**
###############################

Ensure the existing installations of ROCm, including */opt/rocm*, are completely removed before the v3.3 ROCm toolkit installation. The ROCm v3.3 package requires a clean installation.
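A minimal sketch of the pre-install check implied above; */opt/rocm* is the real install root, but the helper function itself is illustrative, not part of the ROCm tooling:

```shell
# Sketch: flag a leftover ROCm tree before a clean v3.3 install.
# The helper name and messages are illustrative assumptions.
rocm_leftover() {
  if [ -e "$1" ]; then
    echo "found: $1 still exists; remove it before installing v3.3"
  else
    echo "clear: $1 is absent"
  fi
}

rocm_leftover /opt/rocm
```

Actual removal of an old installation should be done through the package manager, not by deleting the directory by hand.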

* To install a single instance of ROCm, use the rocm-dkms or rocm-dev packages to install all the required components. This creates a symbolic link */opt/rocm* pointing to the corresponding version of ROCm installed on the system.

* To install individual ROCm components, create the */opt/rocm* symbolic link pointing to the version of ROCm installed on the system. For example, *# ln -s /opt/rocm-3.3.0 /opt/rocm*
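A hedged sketch of how the versioned directories and the */opt/rocm* link interact, run in a throwaway temporary directory so it touches nothing real (the `rocm-3.5.0` name is purely illustrative):

```shell
# Demonstrate version toggling via symlink in a temp dir; on a real
# system the paths would be /opt/rocm-3.3.0, /opt/rocm-<ver>, /opt/rocm.
demo=$(mktemp -d)
mkdir -p "$demo/rocm-3.3.0" "$demo/rocm-3.5.0"

ln -sfn "$demo/rocm-3.3.0" "$demo/rocm"   # point the link at v3.3.0
readlink "$demo/rocm"

ln -sfn "$demo/rocm-3.5.0" "$demo/rocm"   # toggle to another version
readlink "$demo/rocm"
```

`ln -sfn` replaces the existing link atomically instead of descending into the directory it points to, which is what makes the toggle safe to repeat.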

Review the following important notes:

To install a single instance of the ROCm package, access the non-versioned packages. You must not install any components from the multi-instance set.

For example,

* rocm-dkms

A fresh installation or an upgrade of the single-version installation will remove the existing multi-version installation.

**Multi Version Installation**

* To install a multi-instance of the ROCm package, access the versioned packages and components.

For example,


.. image:: /Current_Release_Notes/MultiIns.png

**IMPORTANT**: A single-instance ROCm package cannot co-exist with the multi-instance package.

**NOTE**: The multi-instance installation applies only to ROCm v3.3 and above. This package requires a fresh installation after the complete removal of existing ROCm packages. The multi-version installation is not backward compatible.


**GPU Process Information**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

New functionality to display process information for GPUs is available in this release. For example, you can view the process details to determine whether the GPU(s) must be reset.

To display the GPU process details, you can:

* Invoke the API

or

https://github.com/RadeonOpenCompute/rocm_smi_lib/blob/master/docs/ROCm_SMI_Manu
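The exact output format of the process query is not reproduced here. As an illustration only, the sketch below filters process IDs out of a made-up sample listing; both the sample text and its field layout are assumptions, not the documented tool output:

```shell
# Illustration only: extract PIDs from a fabricated process listing of
# the kind a GPU process query might return. The sample text and field
# layout are assumptions, not the real ROCm SMI output format.
sample='PID 4242 python3 1 GPU(s)
PID 4243 trainer 2 GPU(s)'

pids=$(printf '%s\n' "$sample" | awk '$1 == "PID" {print $2}')
printf '%s\n' "$pids"
```

On a real system, the same filtering would be applied to the live output of the SMI tool instead of a hard-coded sample.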
**Support for 3D Pooling Layers**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

AMD ROCm is enhanced to include support for 3D pooling layers. The implementation of 3D pooling layers now allows users to run 3D convolutional networks, such as ResNext3D, on AMD Radeon Instinct GPUs.


**ONNX Enhancements**
~~~~~~~~~~~~~~~~~~~~~~~~~

Open Neural Network eXchange (ONNX) is a widely used neural network exchange format. The AMD model compiler and optimizer support pre-trained models in ONNX, NNEF, and Caffe formats. Currently, ONNX versions 1.3 and below are supported.

The AMD Neural Net Intermediate Representation (NNIR) is enhanced to handle the rapidly changing ONNX versions and their layers.

.. image:: /Current_Release_Notes/onnx.png

Code Object Manager (Comgr) Functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following Code Object Manager (Comgr) functions are deprecated.

* `amd_comgr_action_info_set_options`
* `amd_comgr_action_info_get_options`

These functions were originally deprecated in version 1.3 of the Comgr library as they no longer support options with embedded spaces.

The deprecated functions are now replaced with the array-oriented options API, which includes:

* `amd_comgr_action_info_set_option_list`
* `amd_comgr_action_info_get_option_list_count`
Hardware and Software Support Information
==========================================

AMD ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing. In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations.

For more information, see

https://github.com/RadeonOpenCompute/ROCm

24 changes: 12 additions & 12 deletions Deep_learning/Deep-learning.rst
ROCm Tensorflow v1.14 Release
*****************************
We are excited to announce the release of ROCm enabled TensorFlow v1.14 for AMD GPUs.
In this release we have the following features enabled on top of upstream TF1.14 enhancements:

* We integrated the ROCm RCCL library for mGPU communication; details are in the `RCCL github repo <https://github.com/ROCmSoftwarePlatform/rccl>`_
* The XLA backend is enabled for AMD GPUs; the functionality is complete, and performance optimization is in progress.

ROCm Tensorflow v2.0.0-beta1 Release
************************************
In addition to the Tensorflow v1.14 release, we have also enabled Tensorflow v2.0.0-beta1 for AMD GPUs. The TF-ROCm 2.0.0-beta1 release supports the Tensorflow V2 API.
Both whl packages and docker containers are available below.

Tensorflow Installation
***********************

First, you'll need to install the open-source ROCm 3.0 stack. Details can be found `here <https://github.com/RadeonOpenCompute/ROCm>`_


Then, install these other relevant ROCm packages:
MIOpen
======

ROCm MIOpen v2.0.1 Release
**************************
Announcing MIOpen 2.0, our new foundation for deep learning acceleration, which introduces support for Convolutional Neural Network (CNN) acceleration, built to run on top of the ROCm software stack!

This release includes the following:

* Bug fixes and performance improvements
* The Implicit GEMM convolution algorithm is now enabled by default
* Known issues:
The `porting guide <https://github.com/dagamayank/ROCm.github.io/blob/master/doc

ROCm 3.0 has prebuilt packages for MIOpen
***********************************************
Install the ROCm MIOpen implementation (assuming you already have the ``rocm`` and ``rocm-opencl-dev`` packages installed):

MIOpen can be installed on Ubuntu using

Option 2: Install using PyTorch upstream docker file
3. Build PyTorch docker image:

::

   cd pytorch/docker/caffe2/jenkins
   ./build.sh py2-clang7-rocmdeb-ubuntu16.04

Note: This will mount your host home directory on /data in the container.
5. Clone the pytorch master branch (onto the host):

::

   cd ~
   git clone https://github.com/pytorch/pytorch.git
   # or: git clone https://github.com/ROCmSoftwarePlatform/pytorch.git
   cd pytorch
export HCC_AMDGPU_TARGET=gfx906
then
::

   USE_ROCM=1 MAX_JOBS=4 python setup.py install --user

Use MAX_JOBS=n to limit peak memory usage. If the build fails, try falling back to fewer jobs. Four jobs assume available main memory of 16 GB or larger.
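A small sketch of picking MAX_JOBS from the available core count, capped per the memory guideline above; the cap of 4 and the per-job memory assumption come from that guideline, not from the build system itself:

```shell
# Sketch: derive a MAX_JOBS value from the online core count, capped
# at 4 (the guideline above assumes ~16 GB of main memory for 4 jobs).
cores=$(getconf _NPROCESSORS_ONLN)
jobs=$cores
if [ "$jobs" -gt 4 ]; then
  jobs=4
fi
echo "MAX_JOBS=$jobs"
# Then: USE_ROCM=1 MAX_JOBS=$jobs python setup.py install --user
```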

Expand Down Expand Up @@ -497,11 +497,11 @@ Tutorials
**hipCaffe**

* :ref:`caffe`

**MXNet**

* :ref:`mxnet`




