
JupyterLab

Overview

Teaching: 0 min
Exercises: 0 min
Questions
  • What is JupyterLab?

Objectives
  • Learn how to select the number of GPU instances, a Docker image, and the GPU memory for a JupyterLab session

Users can deploy one or more private JupyterLab applications.

To encourage fair sharing, these applications are time-limited.

Selecting the number of GPU instances

  • The AF cluster has four NVIDIA A100 GPUs.

  • Each A100 GPU is partitioned into seven MIG GPU instances.

  • The AF cluster can therefore run up to 28 GPU instances in parallel.

  • You can select anywhere from 0 to 7 GPU instances as resources for your notebook (a way to verify the allocation is sketched after this list).
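
Once the notebook is running, you can check that the allocation matches what you requested. This is a minimal sketch, assuming the nvidia-smi tool is available inside the container image:

    import subprocess

    # List the GPU and MIG devices visible to this JupyterLab session.
    devices = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    print(devices.stdout)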

Selecting a Docker image

There are two image options:

  • Option 1: full Anaconda (ivukotic/ml_platform:conda)

  • Option 2: NVIDIA GPU and ROOT support (ivukotic/ml_platform:latest)

    • This image has ML packages (TensorFlow, Keras, scikit-learn, …) preinstalled; see the /ML_platform_tests tutorial. A quick import check is sketched below.
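
A quick way to confirm that these packages are available is to import them and print their versions. A minimal sketch, assuming the ivukotic/ml_platform:latest image described above (exact package names and versions may differ in your image):

    # Print the versions of the preinstalled ML packages.
    import tensorflow as tf
    import keras
    import sklearn

    print("TensorFlow:", tf.__version__)
    print("Keras:", keras.__version__)
    print("scikit-learn:", sklearn.__version__)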

For software additions and upgrades, please contact ivukotic@uchicago.edu.

Selecting GPU memory

Select 40,836 MB to get an entire A100 GPU, or 4,864 MB for a single MIG instance.
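
To confirm how much memory your session actually received, you can ask TensorFlow what it sees on the allocated device. A hedged sketch, assuming an image with TensorFlow preinstalled (such as ivukotic/ml_platform:latest); the reported value is normally slightly below the nominal size because the runtime reserves some memory:

    from tensorflow.python.client import device_lib

    # Print each GPU device TensorFlow can use and the memory it may allocate (in MB).
    for dev in device_lib.list_local_devices():
        if dev.device_type == "GPU":
            print(dev.name, f"{dev.memory_limit / 1024**2:.0f} MB")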

You can learn more about JupyterLab in its documentation.

Key Points

  • Check the JupyterLab documentation for more details.