![python - How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch - Stack Overflow](https://i.stack.imgur.com/vTJJ1.png)
![ok so the task manager says my dedicated gpu memory is 4gb and it says my shared gpu memory is 8gb and my gpu memory is 12gb? (you can look at the](https://preview.redd.it/ytxdofbdo5u71.png?width=640&crop=smart&auto=webp&s=04354de6af605d182315fba5969fbc1204a16589)
![How do I increase the shared GPU memory allocation multiplicator? - CUDA Programming and Performance - NVIDIA Developer Forums](https://global.discourse-cdn.com/nvidia/original/3X/d/5/d50b9a81fe57b2f3b1a075c9bf50f39cc9dd5241.png)
![Force Full Usage of Dedicated VRAM instead of Shared Memory (RAM) · Issue #45 · microsoft/tensorflow-directml · GitHub](https://user-images.githubusercontent.com/15016720/93714923-7f87e780-fb2b-11ea-86ff-2f8c017c4b27.png)