It’s been some time since the last post where I spoke about some of the cool features introduced in BW 6.6 and BWCE 2.5.0. One of them was Memory Saving Mode. In this post I’ll dive deeper into the feature.
Memory Saving Mode, simply put, is the ability to free up memory more quickly so that Java garbage collection kicks in a bit more frequently. This in turn lowers overall memory usage. It is an opt-in feature and is not turned on by default. So how does the BW Engine decide when to free up memory? Simple: by looking at the process diagram. You can enable Memory Saving Mode at design time by following the documentation here.
This adds new Memory Saving Mode variables to the process diagram. The engine reads these variables to figure out when to ‘free up’ activity inputs and outputs so that they can be garbage collected. Note that this happens while the job is still running, not after it has completed.
So how can users benefit?
Sample Scenario
The above process is a REST service that reads data from a file, parses it, and uses a Sleep (3 sec) activity to simulate a backend call that consumes the XML data from the file. For this test the file is 9MB in size, and you could end up having to parse multiple files. You could easily replace the file parsing with, say, a JDBC call that returns similarly large data; such scenarios are quite common in enterprises. This data is only meant as input to the backend call and is not used after the ServiceCall activity.
With Memory Saving Mode enabled, here is how the memory looks for a 20-user concurrent test (using JMeter) when we run it in Docker for ~3 minutes.
As we can see, the heap used stays below 1GB and GC kicks in at a steady pace throughout the execution.
The same test without Memory Saving Mode does not run in a 1GB heap; the JVM throws an OutOfMemory error. Using the MEMORY_LIMIT environment variable, I set the limit to 4096M, and a 2.8GB heap was dynamically assigned by BWCE. Below is the memory graph from this container.
As you can see, GC is much more erratic here and we are using all of the 2.8GB heap. With more tuning we could land somewhere between 2GB and 2.8GB to find the sweet spot, but the fact remains that this case fails to run in a 1GB heap.
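If you want to double-check how much heap the JVM actually received inside the container (for example the ~2.8GB derived from MEMORY_LIMIT=4096M above), a small standalone check like the sketch below can help. This is plain Java using the standard MemoryMXBean; it is not part of BWCE, and the class name is only for illustration.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // Heap numbers as the JVM sees them; max reflects -Xmx or the container-derived limit
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        System.out.printf("Max heap:  %d MB%n", heap.getMax() / (1024 * 1024));
        System.out.printf("Used heap: %d MB%n", heap.getUsed() / (1024 * 1024));
    }
}
```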
How should I use it then?
Simply put, optimize your processes. Don’t carry forward entire outputs, e.g. a large JDBC result set or a REST API response, just for a few elements. Use local process variables to store only what’s required, thereby enabling the engine to remove these outputs so that Java can free up the memory. Remember, we can only ‘release’ objects to be garbage collected; we cannot force a GC cycle. This helps you optimize memory usage and run the same workloads with fewer resources while maintaining similar throughput. The better designed your processes, the less memory your applications need!
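To make the idea concrete, here is a minimal plain-Java sketch of the same principle: keep only the small piece of data you need and drop the reference to the large output so it becomes eligible for garbage collection while the job continues. This is a conceptual illustration, not BWCE engine code; the class and method names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class ReleaseEarly {

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a large activity output (e.g. a parsed 9MB file or a big JDBC result set)
        List<String> largeOutput = loadLargeOutput();

        // Copy only what the rest of the flow needs (the "local process variable")
        String neededElement = largeOutput.get(0);

        // Drop the reference to the large output: it is now eligible for garbage collection
        // while the rest of the "job" keeps running. We can only make objects collectable,
        // we cannot force a GC cycle.
        largeOutput = null;

        simulateBackendCall(neededElement);
    }

    private static List<String> loadLargeOutput() {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            rows.add("row-" + i);
        }
        return rows;
    }

    private static void simulateBackendCall(String input) throws InterruptedException {
        Thread.sleep(3000); // mirrors the Sleep (3 sec) activity in the sample process
        System.out.println("Backend called with: " + input);
    }
}
```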
WAHEED
Hello Aditya,
I would like to thank you for your useful posts.
For BW6, I need your help to share or list guidelines or best practices on how to plan and architect a TIBCO BW 6.6 environment: how many AppNodes and AppSpaces are required, sizing (CPU cores, memory, ...), and how many applications per AppNode. In other words, the best practices to build a high-performance, flexible environment with fewer problems. The application to be built is an ESB system using TIBCO BW6 and EMS.