Oracle ADF – Passing Data within a Task Flow- Techartifact

Passing Data within a Task Flow –
It is a common use case to pass parameters from an ADF Task Flow with fragments into another ADF Task Flow without fragments. If the bounded task flows referenced by ADF regions share the data control scope and the transaction, we can access the parameters directly through Expression Language. If the data control scope is not shared, you will likely need to use the ADF Contextual Event Framework.

Source: www.oracle.com (video uploaded by ADFInsiderEssentials on YouTube)

JSP Life cycle

The life cycle of a JSP page consists of two phases: the translation phase and the execution phase. Every JSP is a servlet: a JSP page is translated and compiled into a servlet, and the resulting servlet handles the request. The life cycle of a JSP page therefore largely depends on the Servlet API.

The JSP engine goes through the following seven phases.

• Page translation: the page is parsed, and a Java file (a servlet) is created.
• Page compilation: the Java file is compiled into a class file.
• Page loading: the class file is loaded.
• Instance creation: an instance of the servlet is created.
• jspInit() is called.
• _jspService() is called to handle requests.
• jspDestroy() is called to destroy the servlet when it is no longer required.

Translation phase

During the translation phase, the JSP page is translated into a servlet class. All static markup in the JSP page is converted into code that writes the data to the response stream. If you look at the source of the generated servlet class, you will find calls to out.write() which write data to the ServletOutputStream.

If your JSP page contains the static text

JSP life cycle tutorial

the generated servlet will contain code like

out.write("JSP life cycle tutorial");

During the translation phase, JSP elements are treated as follows:
• JSP directives control the behavior of the resulting servlet.
• Scripting elements result in the equivalent Java code.
• Custom tags are converted into code that calls methods on tag handlers.

JSP Page Compilation:

The generated Java servlet file is compiled into a Java servlet class.
Note: The translation of a JSP source page into its implementation class can happen at any time between initial deployment of the JSP page into the JSP container and the receipt and processing of a client request for the target JSP page.


Class Loading:

The Java servlet class that was compiled from the JSP source is loaded into the container.

Execution phase

The JSP life cycle's execution phase is almost identical to the servlet life cycle, because ultimately it is a servlet that is being executed. The servlet class generated at the end of the translation phase represents the contract between the container and the JSP page. According to the JSP specification, the servlet class must implement the HttpJspPage interface, which defines the life cycle methods.
The JSP life cycle includes three methods: jspInit(), _jspService() and jspDestroy().

Initialization:

The jspInit() method is called immediately after the instance is created. It is called only once during the JSP life cycle.

_jspService() execution:

This method is called for every request to the JSP during its life cycle; this is where the page finally serves the purpose it was created for, after passing through all the steps above. The container passes it the request and response objects. _jspService() cannot be overridden.

jspDestroy() execution:

This method is called when the JSP is destroyed. With this call the servlet has served its purpose and submits itself to heaven (garbage collection). This is the end of the JSP life cycle.

jspInit(), _jspService() and jspDestroy() are called the life cycle methods of the JSP.

The HttpJspPage Interface

The javax.servlet.jsp.HttpJspPage interface (together with its parent JspPage) defines three methods:

public void jspInit()

This method is invoked when the JSP page is initialized and is similar to the init() method of a servlet. If you want to provide initialization for a JSP page, you define this method in the declaration part of the page. Most of the time, however, you will not need to define it.

public void _jspService()

public void _jspService(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException

This method represents the body of the JSP page and is invoked on each client request. It is similar to the service() method of a servlet.
Note: You should never provide an implementation of _jspService(), as the web container generates this method automatically from the content of the JSP page.

public void jspDestroy()

This method is invoked just before the JSP page is destroyed by the web container and is similar to the destroy() method of a servlet. If you want to provide cleanup code, you can define this method in the declaration part of the JSP page.

Understanding the JSP life cycle with an example

This example shows how to execute code during the initialization and destroy phases of the JSP life cycle.
It is a simple request counter that displays how many times the page has been called, i.e. how many requests it has received. The page declares an integer counter variable which is set to zero when the page is initialized. Each time the page receives a request, the counter is incremented by one and displayed to the user. At each life cycle stage, a message is logged to the server log.
lifecycle.jsp

	<%! 
	int counter;
	public void jspInit() {
		counter = 0;
		log("The lifecycle jsp has been initialized");
	}		
	%>	
 
<html>
	<head>
		<title>JSP Life Cycle Example</title>	
	</head>
	<body>
		<%
		log("The lifecycle jsp has received a request");
		counter++;
		%>		
		<h2>JSP Life cycle : Request counter</h2>
		<p>This page has been called <%=counter %> times </p>
	</body>
</html>
	<%!
		public void jspDestroy() {
		log("The lifecycle jsp is being destroyed");
	}
	%>


How HashMap work in Java

“Hash Map is a Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.” … from Java API
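The null-handling difference from Hashtable mentioned in the API doc is easy to verify with a short snippet (plain JDK, nothing assumed beyond java.util):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value for null key"); // HashMap permits one null key
        hashMap.put("key", null);                // ...and any number of null values
        System.out.println(hashMap.get(null));   // prints: value for null key

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");         // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable threw NullPointerException");
        }
    }
}
```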

After reading this definition, a few questions come to mind:

How does HashMap store data internally?
What happens when I store new information in the map?
How does HashMap find my data?

And when I tried to explore these, I found the topic more and more interesting.

HashMap has a static nested class named Entry which implements the Map.Entry interface. In the pre-generics source it looks roughly like this:

static class Entry implements Map.Entry {
    final Object key;
    Object value;
    final int hash;
    Entry next;

    Entry(int hash, Object key, Object value, Entry next) {
        this.value = value;
        this.next = next;
        this.key = key;
        this.hash = hash;
    }
    // Other methods
}

Every time we insert a key-value pair into a HashMap using the put() method, a new Entry object is created (not always: if the key already exists, only its value is replaced). Internally, the map uses two data structures to manage and store data:

Array
Linked list

This is how HashMap manages data:

Each index of the array is a bucket.
To identify the bucket for a given key, HashMap uses key.hashCode() and performs some operations:
bucket (index) = HashMap.indexFor(HashMap.hash(key.hashCode()), entryArray.length)
This means two keys with different hashCodes can fall into the same bucket.
If a bucket is empty (table[i] is null), the Entry object is simply inserted at the ith position:
table[i] = new Entry(hash, key, value, null)
If a bucket already holds some entries, the following is used:
Entry entry = table[i];
table[i] = new Entry(hash, key, value, entry);
This means the latest entry resides at the head of the bucket.
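The bucket lookup can be sketched in plain Java. indexFor() and hash() are internal (package-private) HashMap methods, so the snippet below re-implements the same idea, assuming the classic hash-and-mask scheme where the table length is always a power of two:

```java
public class BucketIndexDemo {
    // A supplemental hash in the spirit of HashMap's internal hash():
    // it spreads higher bits downward so the mask below can see them.
    // (The exact formula has varied between JDK versions; this is a sketch.)
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Equivalent of indexFor(hash, length): valid because the table length
    // is a power of two, so (length - 1) is an all-ones bit mask.
    static int indexFor(int hash, int length) {
        return hash & (length - 1);
    }

    public static void main(String[] args) {
        int tableLength = 16; // HashMap's default capacity
        String key = "example";
        System.out.println("bucket = " + indexFor(hash(key.hashCode()), tableLength));

        // Different hash values can still land in the same bucket:
        System.out.println(indexFor(5, tableLength) == indexFor(21, tableLength)); // true
    }
}
```

Masking with (length - 1) instead of using the modulo operator is why HashMap keeps its capacity a power of two: the bitwise AND is cheaper and yields the same bucket distribution over the low bits.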

Load factor: the load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. Together with the capacity it defines a threshold, the maximum number of entries that can be stored before a resize. When the number of entries exceeds the product of the load factor and the current capacity, the table is rehashed into double the number of buckets. The default load factor of 0.75 means the HashMap may reach 75% of its capacity before it is rehashed and its capacity doubles.

Higher load factor values decrease the space overhead but increase the lookup cost. The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
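The threshold arithmetic can be made concrete with a small sketch (default values taken from the HashMap documentation; the real threshold field is private):

```java
public class LoadFactorDemo {
    public static void main(String[] args) {
        int capacity = 16;        // HashMap's default initial capacity
        float loadFactor = 0.75f; // HashMap's default load factor

        // The map resizes once the entry count exceeds capacity * loadFactor.
        int threshold = (int) (capacity * loadFactor);
        System.out.println("resize after " + threshold + " entries"); // 12

        // After a resize the capacity doubles and the threshold is recomputed.
        capacity *= 2;
        threshold = (int) (capacity * loadFactor);
        System.out.println("next resize after " + threshold + " entries"); // 24
    }
}
```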

RACE CONDITION ON HASHMAP IN JAVA

A race condition exists when a HashMap is resized in Java. If two threads find at the same time that the HashMap needs resizing, they will both try to resize it.
During resizing, the elements in a bucket (stored in a linked list) are reversed in order as they migrate to the new bucket, because HashMap appends new elements at the head rather than the tail, to avoid traversing the list.
If the race condition occurs, you can end up with a cyclic linked list, and subsequent lookups spin in an infinite loop.
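HashMap offers no built-in fix for this; the usual remedy is java.util.concurrent.ConcurrentHashMap (or Collections.synchronizedMap) whenever multiple threads write to the map. A minimal sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    public static void main(String[] args) throws InterruptedException {
        // ConcurrentHashMap resizes safely under concurrent writes.
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Runnable writer1 = () -> { for (int i = 0;    i < 1000; i++) map.put(i, i); };
        Runnable writer2 = () -> { for (int i = 1000; i < 2000; i++) map.put(i, i); };

        Thread t1 = new Thread(writer1);
        Thread t2 = new Thread(writer2);
        t1.start(); t2.start();
        t1.join();  t2.join();

        // Both threads forced several internal resizes; no corruption, no infinite loop.
        System.out.println(map.size()); // 2000
    }
}
```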

WHAT WILL HAPPEN IF TWO DIFFERENT HASHMAP KEY OBJECTS HAVE THE SAME HASHCODE?
A COLLISION OCCURS: since hashCode() is the same, the bucket location is the same. Because HashMap uses a linked list within each bucket, the new key-value entry is stored in the next node of that linked list.

COLLISION RESOLUTION

We find the bucket location by calling hashCode() on the key object. After finding the bucket location, we call the key's equals() method to identify the correct node in the linked list and return the associated value object for that key in the HashMap.
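This can be observed directly with a contrived key class (BadKey below is purely illustrative) whose hashCode() is constant, forcing every key into the same bucket while equals() still distinguishes them:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Contrived key: every instance hashes to 1, so all keys share one bucket,
    // but equals() still tells them apart.
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 1; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey("a"), "first");
        map.put(new BadKey("b"), "second");
        // Same bucket, but equals() picks the right node in the chain:
        System.out.println(map.get(new BadKey("a"))); // first
        System.out.println(map.get(new BadKey("b"))); // second
        System.out.println(map.size());               // 2
    }
}
```

If equals() were not overridden, the second lookup with a fresh BadKey("a") would miss, since Object.equals() compares identity; correct hashCode()/equals() pairs are what make collision resolution work.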


Ref- http://mkbansal.wordpress.com/2010/06/24/hashmap-how-it-works/