
On 18th May 2022, PyTorch announced support for GPU-accelerated PyTorch training on Mac.

I followed the process below to set up PyTorch on my MacBook Air M1 (using miniconda):

```
$ pip install --pre torch torchvision torchaudio --extra-index-url
```

I am trying to execute a script from Udacity's Deep Learning Course, available here.

The script moves the models to the GPU using the following code:

```python
G.cuda()
```

However, this will not work on M1 chips, since there is no CUDA.

If we want to move the models to the M1 GPU, move our tensors to the M1 GPU, and train entirely on the M1 GPU, what should we be doing?

If relevant: G and D are the Generator and Discriminator of a GAN. The relevant layer definitions are:

```python
# Generator: project the latent vector, then upsample with transposed convolutions
self.fc1 = nn.Linear(in_features=z_size, out_features=4*4*conv_dim*4)
self.dc1 = deconv(in_channels=conv_dim*4, out_channels=conv_dim*2, kernel_size=4, stride=2, padding=1, batch_norm=True)
self.dc2 = deconv(in_channels=conv_dim*2, out_channels=conv_dim, kernel_size=4, stride=2, padding=1, batch_norm=True)
self.dc3 = deconv(in_channels=conv_dim, out_channels=3, kernel_size=4, stride=2, padding=1, batch_norm=False)

# Discriminator: final fully connected layer producing a single logit
self.fc1 = nn.Linear(in_features=4*4*conv_dim*4, out_features=1, bias=True)
```

and the generator is instantiated with:

```python
G = Generator(z_size=z_size, conv_dim=conv_dim)
```
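The snippets above call a `deconv` helper that is not shown here. In the course notebooks this is typically a thin wrapper around `nn.ConvTranspose2d` with optional batch normalization; a sketch consistent with the keyword arguments used above would be (the exact course implementation may differ):

```python
import torch.nn as nn

def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Transposed-convolution block with optional batch normalization."""
    layers = [nn.ConvTranspose2d(in_channels, out_channels, kernel_size,
                                 stride=stride, padding=padding, bias=False)]
    if batch_norm:
        layers.append(nn.BatchNorm2d(out_channels))
    return nn.Sequential(*layers)
```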

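My understanding is that the nightly builds expose the Apple-silicon GPU as an `"mps"` device, so I would expect a device-agnostic pattern like the sketch below to replace the `.cuda()` calls. It assumes `G`, `D`, `z_size`, and `batch_size` from the course script; `device`, `z`, and `real_images` are illustrative names I made up, not part of the course code:

```python
import torch

# Use the Apple-silicon GPU through the MPS backend when it is available,
# otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Move the models to the device instead of calling .cuda():
G = G.to(device)
D = D.to(device)

# Every tensor the models consume has to live on the same device, e.g. the
# latent vectors fed to the generator and the real images fed to the discriminator:
z = torch.randn(batch_size, z_size, device=device)
real_images = real_images.to(device)

fake_images = G(z)       # forward pass on the M1 GPU
d_real = D(real_images)  # forward pass on the M1 GPU
```

Is replacing the hard-coded `G.cuda()` calls with `.to(device)` like this the right approach, or is there more to it (for example, operations the MPS backend does not support yet)?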