Real-Time-Voice-Cloning PyTorch installation is not configured to use CUDA

After cloning the repo and installing all dependencies, I ran `python demo_cli.py` and got the following output:

WARNING: Logging before flag parsing goes to stderr.
W0907 20:44:16.745634 4394239424 deprecation_wrapper.py:119] From /Users/yev/Sites/voice/synthesizer/models/modules.py:91: The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead.

Arguments:

enc_model_fpath:   encoder/saved_models/pretrained.pt
syn_model_dir:     synthesizer/saved_models/logs-pretrained
voc_model_fpath:   vocoder/saved_models/pretrained/pretrained.pt
low_mem:           False
no_sound:          False

Running a test of your configuration...

Your PyTorch installation is not configured to use CUDA. If you have a GPU ready for deep learning, ensure that the drivers are properly installed, and that your CUDA version matches your PyTorch installation. CPU-only inference is currently not supported.

I believe the main takeaway is "Your PyTorch installation is not configured to use CUDA".

I have both PyTorch and CUDA installed, but I'm not sure how to configure PyTorch to use CUDA.
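For reference, a quick way to check whether a PyTorch installation can see a CUDA device is from a Python shell (this is the standard PyTorch API, nothing specific to this repo):

```python
import torch

# True only if this PyTorch build has CUDA support AND a working Nvidia GPU/driver is present
print(torch.cuda.is_available())

# The CUDA version this PyTorch build was compiled against (None for CPU-only builds)
print(torch.version.cuda)
```

If the first call prints False, no amount of configuration will make this machine's PyTorch use CUDA.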

I am trying to run this repo on a MacBook Pro running macOS Mojave 10.14.5.

This machine does NOT come with an Nvidia graphics card. The installed graphics card is a Radeon Pro 560 4GB. I don't think you can run CUDA without an Nvidia graphics card.

Is it only possible to run this application on a machine with an Nvidia graphics card?


@yevgetman I'm not associated with this project at all, but you are correct in that the MacBook does NOT have an Nvidia GPU, which means no CUDA. You wouldn't want to train any of the models without a GPU, but I would assume you could run inference (at reduced performance) on a CPU.
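For what it's worth, the usual PyTorch pattern for falling back to the CPU looks like this (a minimal sketch with a stand-in model, not this repo's actual code):

```python
import torch
import torch.nn as nn

# Use the GPU when CUDA is actually usable, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)    # stand-in for the real model
x = torch.randn(1, 4, device=device)  # inputs must live on the same device as the model
with torch.no_grad():
    out = model(x)
print(device, out.shape)
```

This repo appears to hard-code the CUDA path instead, which is why the demo refuses to start on a CPU-only machine.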

Running inference on the CPU rather than the GPU would require quite a few changes to the code base and its dependencies; this is based on my very high-level scan of the project. It would be much easier to rent an instance from Google/Microsoft/Amazon that has an Nvidia GPU.
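One concrete example of the kind of change involved: the pretrained checkpoints would need to be loaded with a CPU `map_location`, since a checkpoint saved on a GPU fails to load on a CPU-only machine otherwise (standard `torch.load` behavior; the path below is just the one from the output above):

```python
import torch

# Remap any CUDA tensors inside the checkpoint onto the CPU at load time;
# without map_location, a GPU-saved checkpoint raises an error on a CPU-only machine
checkpoint = torch.load("encoder/saved_models/pretrained.pt",
                        map_location=torch.device("cpu"))
```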

See issue #54