Implementing Log4J

Use log4j.jar and commons-logging.jar in the classpath. These jar files contain the classes needed to set up logging. Declare the following as a class attribute in the class where you want to implement logging, say SomeClass.java:

import org.apache.log4j.Logger;

private static Logger logger = Logger.getLogger(com.mattiz.SomeClass.class);

You would have a sample log4j.properties configuration file, which you should put in your classpath. Suppose you wanted the line "Test Debug" to go to your log file; use logger.error, logger.debug, or logger.warn, viz.:

logger.error("Test Debug");

For warning/debugging messages you could use

logger.warn("This is a warning");

or

logger.debug("Variable value is " + var);
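
Putting these pieces together, here is a minimal sketch of a class that logs at several levels (the doWork method and its parameter are illustrative):

package com.mattiz;

import org.apache.log4j.Logger;

public class SomeClass {

    // One logger per class, named after the class itself
    private static Logger logger = Logger.getLogger(SomeClass.class);

    public void doWork(int var) {
        logger.debug("Variable value is " + var); // visible only when the level is DEBUG
        logger.warn("This is a warning");         // visible at WARN level or below
        logger.error("Test Debug");               // visible at ERROR level or below
    }
}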

In the log4j configuration file you can set the level of logging. In the sample configuration file you will find words like ERROR, WARN, or DEBUG. If you set the level to WARN, then only warnings and errors will print, while debug messages won't. If you set the level to DEBUG, then debug, warn, and error messages will all print. These may go to the console or to a separate log output file, based on the settings in the log4j configuration file. The log4j.properties file also contains settings to "roll over" files: if a file grows beyond, say, 1 MB, log4j automatically backs it up to another file and creates a new one. With the settings in the log4j.properties file you can control what is being logged. By setting the level to ERROR, debug statements won't be printed in the production system. For development, use the DEBUG level because you want to see debug messages.

The hierarchy is

DEBUG < INFO < WARN < ERROR < FATAL
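
The ordering can also be verified in code; a minimal sketch using log4j's Level class:

import org.apache.log4j.Level;

public class LevelDemo {
    public static void main(String[] args) {
        // WARN sits above DEBUG in the hierarchy...
        System.out.println(Level.WARN.isGreaterOrEqual(Level.DEBUG)); // true
        // ...and below FATAL
        System.out.println(Level.FATAL.isGreaterOrEqual(Level.WARN)); // true
    }
}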

If the level is set to WARN, all warn, error, and fatal messages will be printed, but no info and debug messages will be. You can set logging levels for each package. So if you are working on a certain package you can set that package's logging level to DEBUG, and another package's logging level to WARN, in the log4j properties file.

For example:

log4j.category.com.mattiz.security=WARN

You need not use all the logging levels; mostly people use DEBUG and ERROR. You can put error-level messages inside the catch block. Error logs look like this:

logger.error("Exception critical" + ex.toString());

You can do this for production systems.
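
For instance, a minimal sketch of logging from a catch block (the PaymentService class and its failing logic are illustrative, not from the original text):

import org.apache.log4j.Logger;

public class PaymentService {

    private static Logger logger = Logger.getLogger(PaymentService.class);

    public void process() {
        try {
            // business logic that may fail goes here
            throw new IllegalStateException("simulated failure");
        } catch (Exception ex) {
            // ERROR-level messages remain visible in production,
            // where the level is typically set to ERROR
            logger.error("Exception critical" + ex.toString());
        }
    }
}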

Contents of a simple log4j.properties configuration file:

# Print FATAL, ERROR and WARN messages - do not print DEBUG and INFO messages
# the sequence is FATAL > ERROR > WARN > INFO > DEBUG
# since the level is set to WARN, WARN and the levels above it will be printed
# while the levels below it will not be printed
log4j.rootCategory=WARN, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# the conversion pattern will be used to format the timestamp, for example:
# 2004-05-13 17:15:13,318 [Servlet.Engine.Transports : 1] DEBUG
# this will be prepended to all logging messages
log4j.appender.stdout.layout.ConversionPattern=%d [%t] %-5p %c{1} - %m%n

Contents of a more elaborate log4j.properties file:

# the general level is set to WARN
# WARN, ERROR, FATAL will be printed
# in addition to printing to System Out, also print to "RollingFile"
log4j.rootCategory=WARN, stdout, RollingFile
# the level for the com.mattiz.web package (and subpackages) is set to DEBUG
# DEBUG, WARN, ERROR, FATAL will be printed for the web package
log4j.category.com.mattiz.web=DEBUG
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# print the date and time for System Out
log4j.appender.stdout.layout.ConversionPattern=%d [%t] %-5p %c{1} - %m%n
# save the log to a rolling file also
log4j.appender.RollingFile=org.apache.log4j.RollingFileAppender
# location of the rolling file
log4j.appender.RollingFile.File=d:/mattiz/mattiz.log
# if the file becomes greater than 500KB, create a new file and back up the old one
log4j.appender.RollingFile.MaxFileSize=500KB
# keep 5 backup files
log4j.appender.RollingFile.MaxBackupIndex=5
log4j.appender.RollingFile.layout=org.apache.log4j.PatternLayout
# print the date and time for RollingFile
log4j.appender.RollingFile.layout.ConversionPattern=%d [%t] %-5p %c{1} - %m%n

HashMap vs Hashtable

Hashtable and HashMap are both key-value based data structures; both allow access to data based on a key. They have some differences in how they store values and in iteration performance.

Some of the basic differences are the following:

                      HashMap                          Hashtable
Synchronization       Un-synchronized                  Synchronized
Null keys/values      One null key and null values     Not allowed; NullPointerException is thrown for null
In JDK since          1.2                              1.0
Subclass of           AbstractMap                      Dictionary
Default load factor   0.75                             0.75
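
A quick sketch of the null-handling difference from the table above (the class name is illustrative):

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok");   // HashMap permits one null key
        hashMap.put("key", null);  // ...and null values

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom"); // Hashtable rejects nulls
        } catch (NullPointerException e) {
            System.out.println("Hashtable threw NullPointerException");
        }
    }
}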

Hashtable

Hashtable performance is affected by the initial capacity and load factor provided when the object is created. When the load factor threshold is exceeded, it grows (rehashes) itself. The initial capacity should not be set too high, otherwise space is wasted. A higher load factor saves space but increases the time taken to search for an entry; a lower load factor uses more space but makes lookups faster.
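
Both tuning knobs can be passed to the constructor; a small sketch (the capacity and load factor values are arbitrary):

import java.util.Hashtable;

public class HashtableTuningDemo {
    public static void main(String[] args) {
        // initial capacity 64, load factor 0.9: saves space, but lookups
        // may slow down because buckets fill up before rehashing occurs
        Hashtable<String, Integer> table = new Hashtable<>(64, 0.9f);
        table.put("answer", 42);
        System.out.println(table.get("answer"));
    }
}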

HashMap

HashMap provides no guarantees about the order of the map. HashMap is not synchronized, which means that multiple threads can access the same instance at the same time, and concurrent access can lead to structural modification.

HashMap is better for non-threaded applications, as unsynchronized objects perform better than synchronized ones.
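
When synchronization is genuinely needed, a common alternative to Hashtable is to wrap a HashMap; a minimal sketch:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SyncMapDemo {
    public static void main(String[] args) {
        // wraps the HashMap so that every method call is synchronized
        Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<String, String>());
        syncMap.put("thread-safe", "yes");
        System.out.println(syncMap.get("thread-safe"));
    }
}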

Oracle ADF Interview Questions Part-2

Hi all, here again I have come up with a much-awaited article. This is Oracle ADF Interview Questions Part-2. I have accumulated, formulated, and gathered this information from various sources, so that it would be helpful for the Oracle ADF community.

But I am wary of Mr. Saravanan of jdeveloperandadf.blogspot.com, who should not again copy-paste the content of my post as he did for Part-1. Even after my many replies to him, he has not bothered to credit the original URL or remove the content taken from my blog.

You can compare:

Original post (my blog): https://www.techartifact.com/blogs/2011/04/oracle-adf-interview-question-part-1.html

The copied content: http://jdeveloperandadf.blogspot.com/2011/02/oracle-adf-interview-questions-and.html

These are the questions for Part-2.

Q: Describe the Oracle ADF architecture?

Ans: In line with community best practices, applications you build using the Fusion web technology stack achieve a clean separation of business logic, page navigation, and user interface by adhering to a model-view-controller (MVC) architecture. In an MVC architecture:

  • The model layer represents the data values related to the current page.
  • The view layer contains the UI pages used to view or modify that data.
  • The controller layer processes user input and determines page navigation.
  • The business service layer handles data access and encapsulates business logic.


Key modules in the framework include:

  • Oracle ADF Business Components, which simplifies building business services.

  • Oracle ADF Faces rich client, which offers a rich library of AJAX-enabled UI components for web applications built with JavaServer Faces (JSF).

  • Oracle ADF Controller, which integrates JSF with Oracle ADF Model. The ADF Controller extends the standard JSF controller by providing additional functionality, such as reusable task flows that pass control not only between JSF pages, but also between other activities, for instance method calls or other task flows.

(Figure: Simple Oracle ADF Architecture)

Q: What are Associations and View Links?

Ans: They define the join or link among EOs and VOs. An association defines a link between EOs; it can be thought of as a primary key/foreign key relationship between tables. A view link is for VOs; it defines the join conditions. A view link can be based on an association or on attributes. Basing view links on associations has the advantage of the entity cache, and a few more advantages which are unveiled later.

Q: What is the Business Component Tester?

Ans: The most used component of the model layer is the tester, which is used to run and check the data model that is implemented. This serves as the first line of defense to see if data is exposed as we need it, and to test the data model without needing to create a UI.

Q: What is a task flow?

Ans: ADF task flows provide a modular approach for defining control flow in an application. Instead of representing an application as a single large JSF page flow, you can break it up into a collection of reusable task flows. Each task flow contains a portion of the application's navigational graph. The nodes in the task flows are activities; an activity node represents a simple logical operation such as displaying a page, executing application logic, or calling another task flow. The transitions between the activities are called control flow cases.

https://www.techartifact.com/blogs/2011/07/basic-of-task-flow-in-oracle-adf.html

Q: What are the advantages of task flows over JSF page flows?

Ans: ADF task flows offer significant advantages over standard JSF page flows:

  • The application can be broken up into a series of modular flows that call one another.
  • You can add nodes such as views, method calls, and calls to other task flows to the task flow diagram.
  • Navigation is between pages as well as other activities, including routers.
  • ADF task flows are reusable within the same or an entirely different application. After you break up your application into task flows, you may decide to reuse task flows that contain common functionality.
  • Shared memory scope (for example, page flow scope) enables data to be passed between activities within the task flow. Page flow scope defines a unique storage area for each instance of an ADF bounded task flow.

Q: What are the types of task flows?

Ans: The two types of ADF task flow are:

■ Unbounded task flow: a set of activities, control flow rules, and managed beans that interact to allow a user to complete a task. An ADF unbounded task flow consists of all activities and control flows in an application that are not included within any bounded task flow.

■ Bounded task flow: a specialized form of task flow that, in contrast to an unbounded task flow, has a single entry point and zero or more exit points. It contains its own set of private control flow rules, activities, and managed beans. An ADF bounded task flow allows reuse, parameters, transaction management, and reentry, and is used to encapsulate a reusable portion of an application. A bounded task flow is similar to a Java method in that it:

■ Has a single entry point

■ May accept input parameters

■ May generate return values

■ Has its own collection of activities and control flow rules

■ Has its own memory scope and managed bean lifespan (a page flow scope instance)

A bounded task flow can call another bounded task flow, which can call another, and so on; there is no limit to the depth of the calls. For example, a checkout process can be created as a separate ADF bounded task flow.

Q: What are the different memory scopes in ADF managed beans?

Ans: Please read this link

https://www.techartifact.com/blogs/2012/07/different-memory-scope-in-oracle-adf.html

Q: What is a region in a task flow?

Ans: You can render a bounded task flow in a JSF page or page fragment (.jsff) by using an ADF region. You create an ADF region by dragging and dropping a bounded task flow that contains at least one view activity or one task flow call activity onto the page where you want to render the ADF region. This ensures that the ADF region you create has content to display at runtime.

Q: What is an association accessor?

Ans: It is an operation by which an entity instance at one end of an association can access the related entity object instance at the other end of the association. An accessor that travels from destination to source is called a source accessor, and an accessor that travels from source to destination is called a destination accessor.

It is described in the entity object definition XML files, and can be used by view object and view link definitions to specify cross-entity relationships. Its return type will be the entity object class of the associated entity object definition, or EntityImpl if the associated entity object definition has no entity object class.
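
As a purely hypothetical sketch (the EmployeeImpl/DepartmentImpl names and the "Department" attribute below are invented for illustration and are not from the original post), a generated destination accessor inside an entity class might look like this:

// Inside a hypothetical EmployeeImpl entity class generated by
// ADF Business Components; the names are illustrative only.
public DepartmentImpl getDepartment() {
    // resolves the association and returns the related entity instance
    // (the return type would be EntityImpl if no entity class exists)
    return (DepartmentImpl) getAttribute("Department");
}

public void setDepartment(DepartmentImpl value) {
    setAttribute("Department", value);
}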

Q: What are the different data control scopes?

Ans:

1) Isolated - the task flow gets its own isolated instance of the data control; data is not shared with the calling (parent) flow.

2) Shared (default) - data is shared with the parent flow.

Q: What are the different task flow components?

Ans:
https://www.techartifact.com/blogs/2012/07/q-what-are-different-task-flow-component.html

Q: What is application module pooling and how can we handle it?

Ans: I am still trying to find more information on this.

But for now you can see http://andrejusb.blogspot.com/2010/02/optimizing-oracle-adf-application-pool.html

I have not been able to accumulate all the questions which could be part of this thread. If you have more questions, you can put them in the comments; I will edit this article so that we can collect them all in a single place.