Java Asynchronous Programming
A teaser and precursor to reactive programming, Java asynchronous programming allows efficient use of the CPU and provides scalability.
Channel
----------------------------------
Master difficult programming concepts in a few minutes. I try to explain difficult concepts like Java concurrency in a simple-to-understand manner using animations and small code snippets. Explore videos on topics like Spring Boot, Cloud Foundry, and Java 8 (with more coming soon). I am happy to clarify your doubts. Ask me anything in the comments. I am also happy to take requests for new videos.
New video added every Sunday.
Subscribe or explore the channel - http://bit.ly/defog_tech
Playlists
----------------------------------
Java Executor Service - http://bit.ly/exec_srvc
Java Concurrency - http://bit.ly/java_crncy
Spring Boot 2.0 - http://bit.ly/spr_boot2
Java 8 - http://bit.ly/java_8-11
Intellij IDEA Shortcuts - http://bit.ly/i_idea
Popular Videos
----------------------------------
Executor Service - https://youtu.be/6Oo-9Can3H8
Introduction to CompletableFuture - https://youtu.be/ImtZgX1nmr8
Understand how ForkJoinPool works - https://youtu.be/5wgZYyvIVJk
Java Memory Model in 10 minutes - https://youtu.be/Z4hMFBvCDV4
Volatile vs Atomic - https://youtu.be/WH5UvQJizH0
What is Spring Webflux - https://youtu.be/M3jNn3HMeWg
Video Transcript
Most of the compute devices we have today have more than one core. A typical desktop computer has four cores, and on servers we generally have 16 cores, 32 cores, or more. To take advantage of this much compute power, we create multiple parallel threads in our application. In Java we can do that using new Thread().start(), a thread pool, a fork-join pool, and so on.
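The three creation options just mentioned can be sketched as follows (the class name and task bodies are illustrative, not from the video):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class ThreadCreation {
    public static void main(String[] args) throws Exception {
        // 1. Raw thread: each new Thread maps to one OS (native) thread
        Thread t = new Thread(() ->
                System.out.println("hello from " + Thread.currentThread().getName()));
        t.start();
        t.join();

        // 2. Thread pool: a fixed set of OS threads is reused across many tasks
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println("pool result: " + result.get());
        pool.shutdown();

        // 3. Fork-join pool: a work-stealing pool shared by the whole JVM
        int sum = ForkJoinPool.commonPool()
                .submit(() -> IntStream.rangeClosed(1, 10).sum())
                .join();
        System.out.println("fork-join sum: " + sum);
    }
}
```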
But creating too many parallel threads can cause a problem, and that is because of the way Java works. In Java, every thread that you create is actually an operating-system thread, also called a native thread or kernel thread. Java itself keeps per-thread state such as the program counter, the Java stack, and its stack frames, but for every thread there is also a corresponding OS thread, and those consume a lot of memory. That limits the number of parallel, active threads you can have in your JVM, that is, in your application. So you cannot have tens of thousands of active threads in Java.
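A minimal sketch of that limit: the cap of 1,000 keeps this demo safe, but an uncapped loop eventually fails with `java.lang.OutOfMemoryError: unable to create new native thread`, because each platform thread reserves its own native stack (roughly 1 MB by default on HotSpot, tunable with `-Xss`). The class name is illustrative.

```java
import java.util.concurrent.CountDownLatch;

public class TooManyThreads {
    public static void main(String[] args) throws Exception {
        int cap = 1_000; // safe cap for the demo; unbounded creation exhausts native memory
        CountDownLatch done = new CountDownLatch(1);
        Thread[] threads = new Thread[cap];
        for (int i = 0; i < cap; i++) {
            threads[i] = new Thread(() -> {
                try {
                    done.await(); // park until released, keeping the thread alive
                } catch (InterruptedException ignored) {}
            });
            threads[i].start(); // each start() creates one OS thread with its own stack
        }
        System.out.println("started " + cap + " parked threads");
        done.countDown(); // release all of them
        for (Thread t : threads) t.join();
    }
}
```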
Trying to create that many will throw an OutOfMemoryError and your program will shut down. And once you have too many threads, other problems come up. Let's say you have a lot of threads and a CPU with only two cores. Every core has some local cache, and say core 1 is currently running thread 1, so the local cache holds all the data required by thread 1. Now, with a lot of threads, you have to schedule some other thread at some point in time.
So let's say you want to schedule thread 3 there. For that, you have to flush the cache, that is, remove all the data belonging to thread 1 and load all the data that thread 3 will need; only then can core 1 stop thread 1 and start thread 3. When there is another context switch, back to thread 1, the core has to reverse that operation: flush the local cache again, removing thread 3's data and bringing thread 1's data back. This is the data-locality problem. So when you have a lot of context switches, you keep flushing and refilling the cache, and that adds overhead. There is also a scheduling problem: with hundreds or thousands of threads, your OS and your JVM incur a scheduling overhead that itself takes a lot of time. This issue of having too many threads is heightened even further when you are doing I/O operations. So let's say you have a main thread that is trying to do some I/O-related operation. It could be a file I/O operation or a network operation; within the network category, it could be an HTTP call to a microservice or a database operation. Since this is a network or file I/O operation, it's going to take some time.
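The blocking pattern described here can be sketched with a pipe standing in for the slow network or file; the 200 ms sleep simulates the remote side's latency, and the class name is illustrative:

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class BlockingIoDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream remote = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(remote);

        // Simulated slow server: responds after 200 ms on another thread
        new Thread(() -> {
            try {
                Thread.sleep(200);
                remote.write("response".getBytes());
                remote.close();
            } catch (Exception ignored) {}
        }).start();

        long start = System.nanoTime();
        byte[] buf = new byte[64];
        int n = in.read(buf); // main thread blocks here, doing no work, until data arrives
        long waitedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("read \"" + new String(buf, 0, n)
                + "\" after blocking ~" + waitedMs + " ms");
    }
}
```

While `in.read` waits, the main thread holds its OS thread (and its stack) without using any CPU, which is exactly the waste the transcript goes on to describe.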
And when the main thread triggers that operation, it goes into a wait state. Until the operation is completed, the main thread cannot do anything else, and its CPU cycles are wasted. Once the network I/O or file I/O is completed, the thread goes back into a runnable state and can process the data returned by the I/O operation. This problem of a blocking thread, one that does no other work while the operation is being performed, limits your capacity to scale I/O in your application. You cannot have thousands and thousands of threads doing I/O operations, because every one of them will block and your CPU is not being used efficiently. So what you ideally want is a non