• mlg@lemmy.world

    I’d say about 99% of it is the same.

    Two notable differences were:

    • Podman’s config file is different; I needed to edit it to change where containers are stored, since I have a dedicated location I want to use (see the sketch after this list)
    • The preferred method for running Nvidia GPUs in containers is CDI, which imo is much more concise than Docker’s Nvidia GPU device setup.
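    For the first point, the relevant file is containers/storage.conf (system-wide at /etc/containers/storage.conf, or per-user at ~/.config/containers/storage.conf for rootless). A minimal sketch, with the paths being placeholders for your own dedicated location:

    ```toml
    # ~/.config/containers/storage.conf (rootless) or /etc/containers/storage.conf (rootful)
    [storage]
    driver = "overlay"
    # where images and container layers live -- example path, substitute your own
    graphroot = "/mnt/storage/containers/storage"
    # scratch/runtime state
    runroot = "/run/user/1000/containers"
    ```

    Note that changing graphroot doesn’t migrate anything already pulled; existing images stay in the old location until you re-pull them.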

    The second one is also documented on the Nvidia Container Toolkit site, and it’s very easy to edit a compose file to use CDI instead.
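    Roughly what that switch looks like, as a sketch: the CDI spec itself comes from nvidia-ctk cdi generate, and then the compose file requests the device by its CDI name instead of the old nvidia runtime setup. This assumes a Compose implementation recent enough to understand CDI device requests; with plain podman run the equivalent is --device nvidia.com/gpu=all. The image tag is just an example.

    ```yaml
    services:
      app:
        image: nvidia/cuda:12.4.1-base-ubuntu22.04
        command: nvidia-smi
        deploy:
          resources:
            reservations:
              devices:
                # CDI request instead of the old "driver: nvidia" device setup
                - driver: cdi
                  device_ids:
                    - nvidia.com/gpu=all
    ```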

    There are also some small differences here and there, like Podman asking for a preferred remote registry instead of defaulting to Docker Hub.
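    If you’d rather have the old default-to-Docker-Hub behaviour than the interactive prompt, that’s controlled in registries.conf. A sketch, with docker.io just as an example:

    ```toml
    # /etc/containers/registries.conf (or ~/.config/containers/registries.conf)
    # Podman prompts when a short name like "nginx" could come from several registries;
    # listing only docker.io makes short names resolve there, Docker-style.
    unqualified-search-registries = ["docker.io"]
    ```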