Methods Summary |
---|
public oracle.toplink.essentials.sessions.UnitOfWork | acquireUnitOfWork()PUBLIC:
Nested units of work are not supported in TopLink Essentials.
throw ValidationException.notSupported("acquireUnitOfWork", getClass());
|
public void | addNewAggregate(java.lang.Object originalObject)INTERNAL:
Register a new aggregate object with the unit of work.
getNewAggregates().put(originalObject, originalObject);
|
public void | addObjectDeletedDuringCommit(java.lang.Object object, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Add object deleted during root commit of unit of work.
// The map is keyed on the object; this avoids having to compute the key later on.
getObjectsDeletedDuringCommit().put(object, keyFromObject(object, descriptor));
//bug 4730595: changed to add deleted objects to the changesets.
((UnitOfWorkChangeSet)getUnitOfWorkChangeSet()).addDeletedObject(object, this);
|
public void | addPessimisticLockedClone(java.lang.Object clone)INTERNAL:
Track the given clone as pessimistically locked in this unit of work.
log(SessionLog.FINEST, SessionLog.TRANSACTION, "tracking_pl_object", clone, new Integer(this.hashCode()));
getPessimisticLockedObjects().put(clone, clone);
|
public void | addReadOnlyClass(java.lang.Class theClass)PUBLIC:
Adds the given Java class to the receiver's set of read-only classes.
Cannot be called after objects have been registered in the unit of work.
if (!canChangeReadOnlySet()) {
throw ValidationException.cannotModifyReadOnlyClassesSetAfterUsingUnitOfWork();
}
getReadOnlyClasses().put(theClass, theClass);
ClassDescriptor descriptor = getDescriptor(theClass);
// Also mark all subclasses as read-only.
if (descriptor.hasInheritance()) {
for (Enumeration childEnum = descriptor.getInheritancePolicy().getChildDescriptors().elements();
childEnum.hasMoreElements();) {
ClassDescriptor childDescriptor = (ClassDescriptor)childEnum.nextElement();
addReadOnlyClass(childDescriptor.getJavaClass());
}
}
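Usage sketch (assumes an existing Session named session and hypothetical Employee and Country classes; the read-only class must be added before any objects are registered):
UnitOfWork uow = session.acquireUnitOfWork();
uow.addReadOnlyClass(Country.class); // must precede any object registration
Employee employee = (Employee)uow.registerObject(existingEmployee);
employee.setCountry(someCountry); // Country instances are not cloned or written at commit
uow.commit();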
|
public void | addReadOnlyClasses(java.util.Vector classes)PUBLIC:
Adds the classes in the given Vector to the existing set of read-only classes.
Cannot be called after objects have been registered in the unit of work.
for (Enumeration enumtr = classes.elements(); enumtr.hasMoreElements();) {
Class theClass = (Class)enumtr.nextElement();
addReadOnlyClass(theClass);
}
|
public void | addRemovedObject(java.lang.Object original)INTERNAL:
Register that an object was removed in a nested unit of work.
getRemovedObjects().put(original, original);// Use as set.
|
public void | afterTransaction(boolean committed, boolean isExternalTransaction)INTERNAL:
Called after the transaction has completed (committed or rolled back).
if (!committed && isExternalTransaction) {
// In case jts transaction was internally started but rolled back
// directly by TransactionManager this flag may still be true during afterCompletion
getParent().setWasJTSTransactionInternallyStarted(false);
//bug#4699614 -- added a new life cycle status so we know if the external transaction was rolled back and we don't try to roll back again later
setLifecycle(AfterExternalTransactionRolledBack);
}
if ((getMergeManager() != null) && (getMergeManager().getAcquiredLocks() != null) && (!getMergeManager().getAcquiredLocks().isEmpty())) {
//may have unreleased cache locks because of a rollback...
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(getMergeManager());
this.setMergeManager(null);
}
getParent().afterTransaction(committed, isExternalTransaction);
|
public void | assignSequenceNumber(java.lang.Object object)ADVANCED:
Assign sequence number to the object.
This allows for an object's id to be assigned before commit.
It can be used if the application requires the object's id before the object exists in the database.
Normally all ids are assigned during the commit automatically.
//** sequencing refactoring
startOperationProfile(SessionProfiler.AssignSequence);
try {
ObjectBuilder builder = getDescriptor(object).getObjectBuilder();
// This is done outside of a transaction to ensure optimal concurrency and deadlock avoidance in the sequence table.
if (builder.getDescriptor().usesSequenceNumbers() && !getSequencing().shouldAcquireValueAfterInsert(object.getClass())) {
Object implementation = builder.unwrapObject(object, this);
builder.assignSequenceNumber(implementation, this);
}
} catch (RuntimeException exception) {
handleException(exception);
}
endOperationProfile(SessionProfiler.AssignSequence);
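A minimal sketch of pre-allocating an id, assuming a sequenced, hypothetical Employee class and an existing Session named session:
UnitOfWork uow = session.acquireUnitOfWork();
Employee employee = (Employee)uow.registerObject(new Employee());
uow.assignSequenceNumber(employee); // the id is populated now, before any INSERT
Object id = employee.getId();       // hypothetical accessor; usable before commit
uow.commit();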
|
public void | assignSequenceNumbers()ADVANCED:
Assign sequence numbers to all new objects registered in this unit of work,
or any new objects referenced by registered objects.
This allows for an object's id to be assigned before commit.
It can be used if the application requires the object's id before the object exists in the database.
Normally all ids are assigned during the commit automatically.
// This should be done outside of a transaction to ensure optimal concurrency and deadlock avoidance in the sequence table.
// discoverAllUnregisteredNewObjects() should be called whether sequencing is used
// or not, because the collectAndPrepareObjectsForCommit() method (which calls assignSequenceNumbers())
// needs it.
// It would be logical to remove discoverAllUnregisteredNewObjects() from assignSequenceNumbers()
// and have collectAndPrepareObjectsForCommit() call discoverAllUnregisteredNewObjects()
// first and assignSequenceNumbers() next,
// but assignSequenceNumbers() is a public method which could be called by the user - and
// in this case discoverAllUnregisteredNewObjects() is needed again (though
// if sequencing is not used the call will make no sense - but does no harm either).
discoverAllUnregisteredNewObjects();
Sequencing sequencing = getSequencing();
if (sequencing == null) {
return;
}
int whenShouldAcquireValueForAll = sequencing.whenShouldAcquireValueForAll();
if (whenShouldAcquireValueForAll == Sequencing.AFTER_INSERT) {
return;
}
boolean shouldAcquireValueBeforeInsertForAll = whenShouldAcquireValueForAll == Sequencing.BEFORE_INSERT;
startOperationProfile(SessionProfiler.AssignSequence);
Enumeration unregisteredNewObjectsEnum = getUnregisteredNewObjects().keys();
while (unregisteredNewObjectsEnum.hasMoreElements()) {
Object object = unregisteredNewObjectsEnum.nextElement();
if (getDescriptor(object).usesSequenceNumbers() && ((!isObjectRegistered(object)) || isCloneNewObject(object)) && (shouldAcquireValueBeforeInsertForAll || !sequencing.shouldAcquireValueAfterInsert(object.getClass()))) {
getDescriptor(object).getObjectBuilder().assignSequenceNumber(object, this);
}
}
Enumeration registeredNewObjectsEnum = getNewObjectsCloneToOriginal().keys();
while (registeredNewObjectsEnum.hasMoreElements()) {
Object object = registeredNewObjectsEnum.nextElement();
if (getDescriptor(object).usesSequenceNumbers() && ((!isObjectRegistered(object)) || isCloneNewObject(object)) && (shouldAcquireValueBeforeInsertForAll || !sequencing.shouldAcquireValueAfterInsert(object.getClass()))) {
getDescriptor(object).getObjectBuilder().assignSequenceNumber(object, this);
}
}
endOperationProfile(SessionProfiler.AssignSequence);
|
protected void | basicPrintRegisteredObjects()INTERNAL:
Print the objects in the unit of work.
String cr = Helper.cr();
StringWriter writer = new StringWriter();
writer.write(LoggingLocalization.buildMessage("unitofwork_identity_hashcode", new Object[] { cr, String.valueOf(System.identityHashCode(this)) }));
if (hasDeletedObjects()) {
writer.write(cr + LoggingLocalization.buildMessage("deleted_objects"));
for (Enumeration enumtr = getDeletedObjects().keys(); enumtr.hasMoreElements();) {
Object object = enumtr.nextElement();
writer.write(LoggingLocalization.buildMessage("key_identity_hash_code_object", new Object[] { cr, Helper.printVector(getDescriptor(object).getObjectBuilder().extractPrimaryKeyFromObject(object, this)), "\t", String.valueOf(System.identityHashCode(object)), object }));
}
}
writer.write(cr + LoggingLocalization.buildMessage("all_registered_clones"));
for (Enumeration enumtr = getCloneMapping().keys(); enumtr.hasMoreElements();) {
Object object = enumtr.nextElement();
writer.write(LoggingLocalization.buildMessage("key_identity_hash_code_object", new Object[] { cr, Helper.printVector(getDescriptor(object).getObjectBuilder().extractPrimaryKeyFromObject(object, this)), "\t", String.valueOf(System.identityHashCode(object)), object }));
}
log(SessionLog.SEVERE, SessionLog.TRANSACTION, writer.toString(), null, null, false);
|
public void | beginEarlyTransaction()PUBLIC:
Tell the unit of work to begin a transaction now.
By default the unit of work will begin a transaction at commit time.
The default is the recommended approach; however, sometimes it is
necessary to start the transaction before commit time. When the
unit of work commits, this transaction will be committed.
beginTransaction();
setWasTransactionBegunPrematurely(true);
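A minimal sketch, assuming an existing Session named session and a hypothetical Employee class:
UnitOfWork uow = session.acquireUnitOfWork();
uow.beginEarlyTransaction(); // the database transaction starts here rather than at commit time
Employee employee = (Employee)uow.registerObject(new Employee());
uow.commit(); // commits the transaction begun above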
|
public void | beginTransaction()INTERNAL:
This is internal to the uow; transactions should not be used explicitly in a uow.
The uow shares its parent's transactions.
getParent().beginTransaction();
|
public java.lang.Object | buildOriginal(java.lang.Object workingClone)INTERNAL:
Unregistered new objects have no original, so one must be created for commit and resume,
and to put into the parent. We can NEVER let the same copy of an object exist in multiple units of work.
ClassDescriptor descriptor = getDescriptor(workingClone);
ObjectBuilder builder = descriptor.getObjectBuilder();
Object original = builder.instantiateClone(workingClone, this);
// If no original exists can mean any of the following:
// -A RemoteUnitOfWork and cloneToOriginals is transient.
// -A clone read while in transaction, and built directly from
// the database row with no intermediary original.
// -An unregistered new object
if (checkIfAlreadyRegistered(workingClone, descriptor) != null) {
getCloneToOriginals().put(workingClone, original);
return original;
} else {
// Assume it is an unregisteredNewObject, but this is worrisome, as
// it may be an unregistered existing object, not in the parent cache?
Object backup = builder.instantiateClone(workingClone, this);
// Original is fine for backup as state is the same.
getCloneMapping().put(workingClone, backup);
// Must register new instance / clone as the original.
getNewObjectsCloneToOriginal().put(workingClone, original);
getNewObjectsOriginalToClone().put(original, workingClone);
// no need to register in identity map as the DatabaseQueryMechanism will have
//placed the object in the identity map on insert. bug 3431586
}
return original;
|
public oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet | calculateChanges(oracle.toplink.essentials.internal.helper.IdentityHashtable allObjects, oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet changeSet)INTERNAL:
This method calculates the changes for all objects
within the pending objects collection.
getEventManager().preCalculateUnitOfWorkChangeSet();
Enumeration objects = allObjects.elements();
while (objects.hasMoreElements()) {
Object object = objects.nextElement();
//block of code removed because it will never be touched see bug # 2903565
ClassDescriptor descriptor = getDescriptor(object);
//Block of code removed for code coverage, as it would never have been touched. bug # 2903600
// Use the object change policy to determine if we should run a comparison for this object - TGW
if (descriptor.getObjectChangePolicy().shouldCompareForChange(object, this, descriptor)) {
ObjectChangeSet changes = descriptor.getObjectChangePolicy().calculateChanges(object, getBackupClone(object), changeSet, this, descriptor, true);
if ((changes != null) && changes.isNew()) {
// add it to the new list as well so we do not lose it as it may not have a valid primary key
// it will be moved to the standard list once it is inserted.
changeSet.addNewObjectChangeSet(changes, this);
} else {
changeSet.addObjectChangeSet(changes);
}
}
}
getEventManager().postCalculateUnitOfWorkChangeSet(changeSet);
return changeSet;
|
protected boolean | canChangeReadOnlySet()INTERNAL:
Checks whether the receiver has been used, i.e., whether objects have been registered.
return !hasCloneMapping() && !hasDeletedObjects();
|
public java.lang.Object | checkExistence(java.lang.Object object)INTERNAL:
Register the object and return the clone if it is existing otherwise return null if it is new.
The unit of work determines existence during registration, not during the commit.
ClassDescriptor descriptor = getDescriptor(object.getClass());
Vector primaryKey = descriptor.getObjectBuilder().extractPrimaryKeyFromObject(object, this);
// PERF: null primary key cannot exist.
if (primaryKey.contains(null)) {
return null;
}
DoesExistQuery existQuery = descriptor.getQueryManager().getDoesExistQuery();
existQuery = (DoesExistQuery)existQuery.clone();
existQuery.setObject(object);
existQuery.setPrimaryKey(primaryKey);
existQuery.setDescriptor(descriptor);
existQuery.setCheckCacheFirst(true);
if (((Boolean)executeQuery(existQuery)).booleanValue()) {
//we know if it exists or not, now find or register it
Object objectFromCache = getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, object.getClass(), descriptor, null);
if (objectFromCache != null) {
// Ensure that the registered object is the one from the parent cache.
if (shouldPerformFullValidation()) {
if ((objectFromCache != object) && (getParent().getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, object.getClass(), descriptor, null) != object)) {
throw ValidationException.wrongObjectRegistered(object, objectFromCache);
}
}
// Has already been cloned.
if (!this.isObjectDeleted(objectFromCache))
return objectFromCache;
}
// This is a case where the object is not in the session cache,
// so a new cache-key is used as there is no original to use for locking.
return cloneAndRegisterObject(object, new CacheKey(primaryKey), null);
} else {
return null;
}
|
public boolean | checkForUnregisteredExistingObject(java.lang.Object object)INTERNAL:
Return whether the object already exists in the data store, checking the cache first.
ClassDescriptor descriptor = getDescriptor(object.getClass());
Vector primaryKey = descriptor.getObjectBuilder().extractPrimaryKeyFromObject(object, this);
DoesExistQuery existQuery = descriptor.getQueryManager().getDoesExistQuery();
existQuery = (DoesExistQuery)existQuery.clone();
existQuery.setObject(object);
existQuery.setPrimaryKey(primaryKey);
existQuery.setDescriptor(descriptor);
existQuery.setCheckCacheFirst(true);
return ((Boolean)executeQuery(existQuery)).booleanValue();
|
protected java.lang.Object | checkIfAlreadyRegistered(java.lang.Object object, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Return the value of the object if it already is registered, otherwise null.
// Don't register read-only classes
if (isClassReadOnly(object.getClass(), descriptor)) {
return null;
}
// Check if the working copy is being registered again, in which case we return the same working copy.
Object registeredObject = getCloneMapping().get(object);
if (registeredObject != null) {
return object;
}
// Check if the object exists in the new objects cache; if so, the domain object is being
// re-registered and we should return the same working clone. This check holds only for newly registered objects.
// PERF: Avoid initialization of new objects if none.
if (hasNewObjects()) {
registeredObject = getNewObjectsOriginalToClone().get(object);
if (registeredObject != null) {
return registeredObject;
}
}
return null;
|
public void | clear(boolean shouldClearCache)INTERNAL:
This method will clear all registered objects from this UnitOfWork.
If parameter value is 'true' then the cache(s) are cleared, too.
this.cloneToOriginals = null;
this.cloneMapping = new IdentityHashtable();
this.newObjectsCloneToOriginal = null;
this.newObjectsOriginalToClone = null;
this.deletedObjects = null;
this.allClones = null;
this.objectsDeletedDuringCommit = null;
this.removedObjects = null;
this.unregisteredNewObjects = null;
this.unregisteredExistingObjects = null;
this.newAggregates = null;
this.unitOfWorkChangeSet = null;
this.pessimisticLockedObjects = null;
this.optimisticReadLockObjects = null;
if(shouldClearCache) {
this.getIdentityMapAccessor().initializeIdentityMaps();
if (this.getParent() instanceof IsolatedClientSession) {
this.getParent().getIdentityMapAccessor().initializeIdentityMaps();
}
}
|
public void | clearForClose(boolean shouldClearCache)INTERNAL:
Call this method if the uow will no longer be used for committing transactions:
all the change sets will be dereferenced, and (optionally) the cache cleared.
If the uow is not released, but rather kept around for ValueHolders, then the identity maps shouldn't be cleared:
the parameter value should be 'false'. The lifecycle is set to Birth so that uow ValueHolders can still be used.
Alternatively, if called from the release method then everything should go, and therefore the parameter value should be 'true'.
In this case the lifecycle won't change - uow.release (optionally) calls this method when it (the uow) is already dead.
The reason for calling this method from release is to free the maximum amount of memory right away:
the uow might still be referenced by objects using UOWValueHolders (though they shouldn't be around,
they still might be).
clear(shouldClearCache);
if(isActive()) {
//Reset lifecycle
this.lifecycle = Birth;
this.isSynchronized = false;
}
|
protected java.lang.Object | cloneAndRegisterNewObject(java.lang.Object original)ADVANCED:
Register the new object with the unit of work.
This will register the new object with cloning.
Normally the registerObject method should be used for all registration of new and existing objects.
This version of the register method can only be used for new objects.
This method should only be used if a new object is desired to be registered without an existence check.
ClassDescriptor descriptor = getDescriptor(original);
ObjectBuilder builder = descriptor.getObjectBuilder();
// bug 2612602 create the working copy object.
Object clone = builder.instantiateWorkingCopyClone(original, this);
// Must put in the original to clone to resolve circular refs.
getNewObjectsOriginalToClone().put(original, clone);
// Must put in clone mapping.
getCloneMapping().put(clone, clone);
builder.populateAttributesForClone(original, clone, this, null);
// Must reregister in both new objects.
registerNewObjectClone(clone, original, descriptor);
//Build backup clone for DeferredChangeDetectionPolicy or ObjectChangeTrackingPolicy,
//but not for AttributeChangeTrackingPolicy
Object backupClone = descriptor.getObjectChangePolicy().buildBackupClone(clone, builder, this);
getCloneMapping().put(clone, backupClone);// The backup clone must be updated.
return clone;
|
public java.lang.Object | cloneAndRegisterObject(java.lang.Object original, oracle.toplink.essentials.internal.identitymaps.CacheKey cacheKey, oracle.toplink.essentials.internal.queryframework.JoinedAttributeManager joinedAttributeManager)INTERNAL:
Clone and register the object.
The cache key must be the cache key from the session cache,
as it will be used for locking.
ClassDescriptor descriptor = getDescriptor(original);
ObjectBuilder builder = descriptor.getObjectBuilder();
Object workingClone = builder.instantiateWorkingCopyClone(original, this);
// The cache/objects being registered must first be locked to ensure
// that a merge or refresh does not occur on the object while being cloned to
// avoid cloning a partially merged/refreshed object.
// If a cache isolation level is used, then lock the entire cache;
// otherwise lock the object and its related objects (not using indirection) as a unit.
// If just a simple object (all indirection) a simple read-lock can be used.
// PERF: Cache if check to write is required.
boolean identityMapLocked = this.shouldCheckWriteLock && getParent().getIdentityMapAccessorInstance().acquireWriteLock();
boolean rootOfCloneRecursion = false;
if ((!identityMapLocked) && (this.objectsLockedForClone == null)) {//we may have locked all required objects already
// PERF: If a simple object just acquire a simple read-lock.
if (descriptor.shouldAcquireCascadedLocks()) {
this.objectsLockedForClone = getParent().getIdentityMapAccessorInstance().getWriteLockManager().acquireLocksForClone(original, descriptor, cacheKey.getKey(), getParent());
} else {
cacheKey.acquireReadLock();
}
rootOfCloneRecursion = true;
}
try {
// This must be registered before it is built to avoid really obscure cycles.
getCloneMapping().put(workingClone, workingClone);
//also clone the fetch group reference if applied
if (descriptor.hasFetchGroupManager()) {
descriptor.getFetchGroupManager().copyFetchGroupInto(original, workingClone);
}
//store this for look up later
getCloneToOriginals().put(workingClone, original);
// just clone it.
populateAndRegisterObject(original, workingClone, cacheKey.getKey(), descriptor, cacheKey.getWriteLockValue(), cacheKey.getReadTime(), joinedAttributeManager);
} finally {
// If the entire cache was locked, release the cache lock;
// otherwise release the cache-key for a simple lock,
// or release the entire set of locks for related objects if this was the root.
if (identityMapLocked) {
getParent().getIdentityMapAccessorInstance().releaseWriteLock();
} else {
if (rootOfCloneRecursion) {
if (this.objectsLockedForClone == null) {
cacheKey.releaseReadLock();
} else {
for (Iterator iterator = this.objectsLockedForClone.values().iterator();
iterator.hasNext();) {
((CacheKey)iterator.next()).releaseReadLock();
}
this.objectsLockedForClone = null;
}
}
}
}
return workingClone;
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | collectAndPrepareObjectsForCommit()INTERNAL:
Prepare for commit.
IdentityHashtable changedObjects = new IdentityHashtable(1 + getCloneMapping().size());
// SPECJ: Avoid for CMP.
if (! getProject().isPureCMP2Project()) {
assignSequenceNumbers();
}
//assignSequenceNumbers will collect the unregistered new objects and assign id's to all new
// objects
// Add any registered objects.
for (Enumeration clonesEnum = getCloneMapping().keys(); clonesEnum.hasMoreElements();) {
Object clone = clonesEnum.nextElement();
changedObjects.put(clone, clone);
}
for (Enumeration unregisteredNewObjectsEnum = getUnregisteredNewObjects().keys();
unregisteredNewObjectsEnum.hasMoreElements();) {
Object newObject = unregisteredNewObjectsEnum.nextElement();
changedObjects.put(newObject, newObject);
}
return changedObjects;
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | collectAndPrepareObjectsForNestedMerge()INTERNAL:
Prepare for merge in nested uow.
IdentityHashtable changedObjects = new IdentityHashtable(1 + getCloneMapping().size());
discoverAllUnregisteredNewObjects();
//assignSequenceNumbers will collect the unregistered new objects and assign id's to all new
// objects
// Add any registered objects.
for (Enumeration clonesEnum = getCloneMapping().keys(); clonesEnum.hasMoreElements();) {
Object clone = clonesEnum.nextElement();
changedObjects.put(clone, clone);
}
for (Enumeration unregisteredNewObjectsEnum = getUnregisteredNewObjects().keys();
unregisteredNewObjectsEnum.hasMoreElements();) {
Object newObject = unregisteredNewObjectsEnum.nextElement();
changedObjects.put(newObject, newObject);
}
return changedObjects;
|
public void | commit()PUBLIC:
Commit the unit of work to its parent.
For a nested unit of work this will merge any changes to its objects
with its parent's.
For a first level unit of work it will commit all changes to its objects
to the database as a single transaction. If successful the changes to its
objects will be merged to its parent's objects. If the commit fails the database
transaction will be rolled back, and the unit of work will be released.
If the commit is successful the unit of work is released, and a new unit of work
must be acquired if further changes are desired.
//CR#2189 throw an exception if the UOW tries to commit again (XC)
if (!isActive()) {
throw ValidationException.cannotCommitUOWAgain();
}
if (isAfterWriteChangesFailed()) {
throw ValidationException.unitOfWorkAfterWriteChangesFailed("commit");
}
if (!isNestedUnitOfWork()) {
if (isSynchronized()) {
// If we started the JTS transaction then we have to commit it as well.
if (getParent().wasJTSTransactionInternallyStarted()) {
commitInternallyStartedExternalTransaction();
}
// Do not commit until the JTS wants to.
return;
}
}
if (getLifecycle() == CommitTransactionPending) {
commitAfterWriteChanges();
return;
}
log(SessionLog.FINER, SessionLog.TRANSACTION, "begin_unit_of_work_commit");// bjv - correct spelling
getEventManager().preCommitUnitOfWork();
setLifecycle(CommitPending);
commitRootUnitOfWork();
getEventManager().postCommitUnitOfWork();
log(SessionLog.FINER, SessionLog.TRANSACTION, "end_unit_of_work_commit");
release();
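A typical root unit of work life cycle, shown as a sketch assuming an existing Session named session and a hypothetical Employee class with the accessors used below:
UnitOfWork uow = session.acquireUnitOfWork();
Employee workingCopy = (Employee)uow.registerObject(employee); // modify only the working copy
workingCopy.setSalary(workingCopy.getSalary() + 1000);
uow.commit(); // writes the changes, merges them into the parent, and releases the uow
// a new unit of work must be acquired for further changes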
|
protected void | commitAfterWriteChanges()INTERNAL:
Commits a UnitOfWork where the commit process has already been
initiated by a call to writeChanges().
a.k.a. finalizeCommit()
commitTransactionAfterWriteChanges();
mergeClonesAfterCompletion();
setDead();
release();
|
public void | commitAndResume()PUBLIC:
Commit the unit of work to its parent.
For a nested unit of work this will merge any changes to its objects
with its parent's.
For a first level unit of work it will commit all changes to its objects
to the database as a single transaction. If successful the changes to its
objects will be merged to its parent's objects. If the commit fails the database
transaction will be rolled back, and the unit of work will be released.
The normal commit releases the unit of work, forcing a new one to be acquired if further changes are desired.
The resuming feature allows the same unit of work (and working copies) to continue to be used.
//CR#2189 throw an exception if the UOW tries to commit again (XC)
if (!isActive()) {
throw ValidationException.cannotCommitUOWAgain();
}
if (isAfterWriteChangesFailed()) {
throw ValidationException.unitOfWorkAfterWriteChangesFailed("commit");
}
if (!isNestedUnitOfWork()) {
if (isSynchronized()) {
// JTA synchronized units of work cannot be resumed as there is no
// JTA transaction to register with after the commit,
// technically this could be supported if the uow started the transaction,
// but currently the after completion releases the uow and client session so not really possible.
throw ValidationException.cannotCommitAndResumeSynchronizedUOW(this);
}
}
if (getLifecycle() == CommitTransactionPending) {
commitAndResumeAfterWriteChanges();
return;
}
log(SessionLog.FINER, SessionLog.TRANSACTION, "begin_unit_of_work_commit");// bjv - correct spelling
getEventManager().preCommitUnitOfWork();
setLifecycle(CommitPending);
commitRootUnitOfWork();
getEventManager().postCommitUnitOfWork();
log(SessionLog.FINER, SessionLog.TRANSACTION, "end_unit_of_work_commit");
log(SessionLog.FINER, SessionLog.TRANSACTION, "resume_unit_of_work");
synchronizeAndResume();
getEventManager().postResumeUnitOfWork();
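A sketch of the resume behavior; the same unit of work and working copies remain usable after the commit (hypothetical Employee class and accessors):
UnitOfWork uow = session.acquireUnitOfWork();
Employee workingCopy = (Employee)uow.registerObject(employee);
workingCopy.setSalary(50000);
uow.commitAndResume(); // changes are written; uow and clones stay active
workingCopy.setSalary(55000); // continue working with the same clone
uow.commit();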
|
protected void | commitAndResumeAfterWriteChanges()INTERNAL:
Commits and resumes a UnitOfWork where the commit process has already been
initiated by a call to writeChanges().
a.k.a. finalizeCommit()
commitTransactionAfterWriteChanges();
mergeClonesAfterCompletion();
log(SessionLog.FINER, SessionLog.TRANSACTION, "resume_unit_of_work");
synchronizeAndResume();
getEventManager().postResumeUnitOfWork();
|
public void | commitAndResumeOnFailure()PUBLIC:
Commit the unit of work to its parent.
For a nested unit of work this will merge any changes to its objects
with its parent's.
For a first level unit of work it will commit all changes to its objects
to the database as a single transaction. If successful the changes to its
objects will be merged to its parent's objects. If the commit fails the database
transaction will be rolled back, but the unit of work will remain active.
It can then be retried or released.
The normal commit failure releases the unit of work, forcing a new one to be acquired if further changes are desired.
The resuming feature allows the same unit of work (and working copies) to continue to be used if an error occurs.
// First clone the identity map; on failure, replace the clone back as the cache.
IdentityMapManager failureManager = (IdentityMapManager)getIdentityMapAccessorInstance().getIdentityMapManager().clone();
try {
// Call commitAndResume.
// Oct 13, 2000 - JED PRS #13551
// This method will always resume now. Calling commitAndResume will sync the cache
// if successful. This method will take care of resuming if a failure occurs
commitAndResume();
} catch (RuntimeException exception) {
//reset unitOfWorkChangeSet. Needed for ObjectChangeTrackingPolicy and DeferredChangeDetectionPolicy
setUnitOfWorkChangeSet(null);
getIdentityMapAccessorInstance().setIdentityMapManager(failureManager);
log(SessionLog.FINER, SessionLog.TRANSACTION, "resuming_unit_of_work_from_failure");
throw exception;
}
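A retry sketch; if the commit fails the unit of work and its working copies remain active, so the problem can be corrected and the commit retried:
try {
    uow.commitAndResumeOnFailure();
} catch (RuntimeException failure) {
    // fix the offending state on the still-active clones, then retry
    uow.commitAndResumeOnFailure();
}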
|
public void | commitAndResumeWithPreBuiltChangeSet(oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet uowChangeSet)INTERNAL:
This method is used by the MappingWorkbench for its read-only file feature.
This method must not be exposed to or used by customers until it has been revised
and the feature revisited to support optimistic locking and serialization.
if (!isNestedUnitOfWork()) {
if (isSynchronized()) {
// If we started the JTS transaction then we have to commit it as well.
if (getParent().wasJTSTransactionInternallyStarted()) {
commitInternallyStartedExternalTransaction();
}
// Do not commit until the JTS wants to.
return;
}
}
log(SessionLog.FINER, SessionLog.TRANSACTION, "begin_unit_of_work_commit");// bjv - correct spelling
getEventManager().preCommitUnitOfWork();
setLifecycle(CommitPending);
commitRootUnitOfWorkWithPreBuiltChangeSet(uowChangeSet);
getEventManager().postCommitUnitOfWork();
log(SessionLog.FINER, SessionLog.TRANSACTION, "end_unit_of_work_commit");
log(SessionLog.FINER, SessionLog.TRANSACTION, "resume_unit_of_work");
synchronizeAndResume();
getEventManager().postResumeUnitOfWork();
|
protected boolean | commitInternallyStartedExternalTransaction()PROTECTED:
Used in commit and commit-like methods to commit
an internally started external transaction.
boolean committed = false;
if (!getParent().isInTransaction() || (wasTransactionBegunPrematurely() && (getParent().getTransactionMutex().getDepth() == 1))) {
committed = getParent().commitExternalTransaction();
}
return committed;
|
public void | commitRootUnitOfWork()INTERNAL:
Commit the changes to any objects to the parent.
commitToDatabaseWithChangeSet(true);
// Merge after commit
mergeChangesIntoParent();
|
public void | commitRootUnitOfWorkWithPreBuiltChangeSet(oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet uowChangeSet)INTERNAL:
This method is used by the MappingWorkbench read-only files feature.
It will commit a pre-built unit of work change set to the database.
//new code no need to check old commit
commitToDatabaseWithPreBuiltChangeSet(uowChangeSet, true);
// Merge after commit
mergeChangesIntoParent();
|
protected void | commitToDatabase(boolean commitTransaction)INTERNAL:
Commit changes to the database from a calculated change set.
try {
//CR4202 - ported from 3.6.4
if (wasTransactionBegunPrematurely()) {
// beginTransaction() has been already called
setWasTransactionBegunPrematurely(false);
} else {
beginTransaction();
}
if(commitTransaction) {
setWasNonObjectLevelModifyQueryExecuted(false);
}
Vector deletedObjects = null;// PERF: Avoid deletion if nothing to delete.
if (hasDeletedObjects()) {
deletedObjects = new Vector(getDeletedObjects().size());
for (Enumeration objects = getDeletedObjects().keys(); objects.hasMoreElements();) {
deletedObjects.addElement(objects.nextElement());
}
}
if (shouldPerformDeletesFirst) {
if (hasDeletedObjects()) {
// This must go to the commit manager because uow overrides to do normal deletion.
getCommitManager().deleteAllObjects(deletedObjects);
// Clear change sets of the deleted object to avoid redundant updates.
for (Enumeration objects = getObjectsDeletedDuringCommit().keys();
objects.hasMoreElements();) {
oracle.toplink.essentials.internal.sessions.ObjectChangeSet objectChangeSet = (oracle.toplink.essentials.internal.sessions.ObjectChangeSet)this.unitOfWorkChangeSet.getObjectChangeSetForClone(objects.nextElement());
if (objectChangeSet != null) {
objectChangeSet.clear();
}
}
}
// Let the commit manager figure out how to write the objects
super.writeAllObjectsWithChangeSet(this.unitOfWorkChangeSet);
// Issue all the SQL for the ModifyAllQuery's, don't touch the cache though
issueModifyAllQueryList();
} else {
// Let the commit manager figure out how to write the objects
super.writeAllObjectsWithChangeSet(this.unitOfWorkChangeSet);
if (hasDeletedObjects()) {
// This must go to the commit manager because uow overrides to do normal deletion.
getCommitManager().deleteAllObjects(deletedObjects);
}
// Issue all the SQL for the ModifyAllQuery's, don't touch the cache though
issueModifyAllQueryList();
}
// Issue prepare event.
getEventManager().prepareUnitOfWork();
// writeChanges() does everything but this step.
// do not lock objects unless we are at the commit stage
if (commitTransaction) {
try{
// if we should be acquiring locks before commit let's do that here
if (getDatasourceLogin().shouldSynchronizeObjectLevelReadWriteDatabase()){
setMergeManager(new MergeManager(this));
//If we are merging into the shared cache acquire all required locks before merging.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().acquireRequiredLocks(getMergeManager(), (UnitOfWorkChangeSet)getUnitOfWorkChangeSet());
}
commitTransaction();
}catch (RuntimeException throwable){
if (getDatasourceLogin().shouldSynchronizeObjectLevelReadWriteDatabase() && (getMergeManager() != null)) {
// exception occurred during the commit.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(getMergeManager());
this.setMergeManager(null);
}
throw throwable;
}catch (Error throwable){
if (getDatasourceLogin().shouldSynchronizeObjectLevelReadWriteDatabase() && (getMergeManager() != null)) {
// exception occurred during the commit.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(getMergeManager());
this.setMergeManager(null);
}
throw throwable;
}
}else{
setWasTransactionBegunPrematurely(true);
}
} catch (RuntimeException exception) {
rollbackTransaction(commitTransaction);
if (hasExceptionHandler()) {
getExceptionHandler().handleException(exception);
} else {
throw exception;
}
}
|
protected void | commitToDatabaseWithChangeSet(boolean commitTransaction)INTERNAL:
Commit the changes to any objects to the parent.
try {
startOperationProfile(SessionProfiler.UowCommit);
// The sequence numbers are assigned outside of the commit transaction.
// This improves concurrency, avoids deadlock and in the case of three-tier will
// not leave invalid cached sequences on rollback.
// Also must first set the commit manager active.
getCommitManager().setIsActive(true);
// This will assign sequence numbers.
IdentityHashtable allObjects = collectAndPrepareObjectsForCommit();
// Must clone because the commitManager will remove the objects from the collection
// as the objects are written to the database.
setAllClonesCollection((IdentityHashtable)allObjects.clone());
// Iterate over each clone and let the object builder merge the clones into the originals.
// The change set may already exist if using change tracking.
if (getUnitOfWorkChangeSet() == null) {
setUnitOfWorkChangeSet(new UnitOfWorkChangeSet());
}
calculateChanges(getAllClones(), (UnitOfWorkChangeSet)getUnitOfWorkChangeSet());
// Bug 2834266 only commit to the database if changes were made, avoid begin/commit of transaction
if (hasModifications()) {
commitToDatabase(commitTransaction);
} else {
// CR#... need to commit the transaction if begun early.
if (wasTransactionBegunPrematurely()) {
if (commitTransaction) {
// Must be set to false for release to know not to rollback.
setWasTransactionBegunPrematurely(false);
setWasNonObjectLevelModifyQueryExecuted(false);
commitTransaction();
}
}
getCommitManager().setIsActive(false);
}
endOperationProfile(SessionProfiler.UowCommit);
} catch (RuntimeException exception) {
handleException((RuntimeException)exception);
}
|
protected void | commitToDatabaseWithPreBuiltChangeSet(oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet uowChangeSet, boolean commitTransaction)INTERNAL:
Commit a pre-built change set to the database.
try {
// The sequence numbers are assigned outside of the commit transaction.
// This improves concurrency, avoids deadlock and in the case of three-tier will
// not leave invalid cached sequences on rollback.
// Also must first set the commit manager active.
getCommitManager().setIsActive(true);
//Set empty collection in allClones for merge.
setAllClonesCollection(new IdentityHashtable());
// Iterate over each clone and let the object builder merge the clones into the originals.
setUnitOfWorkChangeSet(uowChangeSet);
commitToDatabase(commitTransaction);
} catch (RuntimeException exception) {
handleException((RuntimeException)exception);
}
|
public void | commitTransaction()INTERNAL:
This is internal to the uow; transactions should not be used explicitly in a uow.
The uow shares its parent's transactions.
getParent().commitTransaction();
|
protected void | commitTransactionAfterWriteChanges()INTERNAL:
After writeChanges() everything has been done except for committing
the transaction. This allows that execution path to 'catch up'.
setWasNonObjectLevelModifyQueryExecuted(false);
if (hasModifications() || wasTransactionBegunPrematurely()) {
try{
//gf934: ensuring release doesn't cause an extra rollback call if acquireRequiredLocks throws an exception
setWasTransactionBegunPrematurely(false);
// if we should be acquiring locks before commit let's do that here
if (getDatasourceLogin().shouldSynchronizeObjectLevelReadWriteDatabase() && (getUnitOfWorkChangeSet() != null)) {
setMergeManager(new MergeManager(this));
//If we are merging into the shared cache acquire all required locks before merging.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().acquireRequiredLocks(getMergeManager(), (UnitOfWorkChangeSet)getUnitOfWorkChangeSet());
}
commitTransaction();
}catch (RuntimeException exception){
if (getDatasourceLogin().shouldSynchronizeObjectLevelReadWriteDatabase() && (getMergeManager() != null)) {
// exception occurred during the commit.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(getMergeManager());
this.setMergeManager(null);
}
rollbackTransaction();
release();
handleException(exception);
}catch (Error throwable){
if (getDatasourceLogin().shouldSynchronizeObjectLevelReadWriteDatabase() && (getMergeManager() != null)) {
// exception occurred during the commit.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(getMergeManager());
this.setMergeManager(null);
}
throw throwable;
}
}
|
public java.util.Vector | copyReadOnlyClasses()INTERNAL:
Copy the read only classes from the unit of work.
return Helper.buildVectorFromHashtableElements(getReadOnlyClasses());
|
public java.lang.Object | deepMergeClone(java.lang.Object rmiClone)PUBLIC:
Merge the attributes of the clone into the unit of work copy.
This can be used for objects that are returned from the client through
RMI serialization or other serialization mechanisms; because the RMI object will
be a clone, this will merge its attributes correctly to preserve object identity
within the unit of work and record its changes.
Everything connected to this object (i.e. the entire object tree where rmiClone
is the root) is also merged.
return mergeClone(rmiClone, MergeManager.CASCADE_ALL_PARTS);
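A sketch of merging a serialized object graph, assuming a detached Employee instance received from a client (hypothetical class):
UnitOfWork uow = session.acquireUnitOfWork();
Employee workingCopy = (Employee)uow.deepMergeClone(serializedEmployee);
// workingCopy is the uow clone; the serialized state has been merged and its changes recorded
uow.commit();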
|
public java.lang.Object | deepRevertObject(java.lang.Object clone)PUBLIC:
Revert the object's attributes from the parent.
This reverts everything the object references.
return revertObject(clone, MergeManager.CASCADE_ALL_PARTS);
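A sketch; the clone and everything it references are restored to the parent session's state (hypothetical Employee accessors):
Employee workingCopy = (Employee)uow.registerObject(employee);
workingCopy.setAddress(newAddress);    // change a referenced object
uow.deepRevertObject(workingCopy);     // discards the change throughout the object tree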
|
public void | deepUnregisterObject(java.lang.Object clone)ADVANCED:
Unregister the object with the unit of work.
This can be used to delete an object that was just created and is not yet persistent.
deleteObject can also be used, but will result in inserting the object and then deleting it.
The method should be used carefully because it will delete all the reachable parts.
unregisterObject(clone, DescriptorIterator.CascadeAllParts);
|
public void | deleteAllObjects(java.util.Vector domainObjects)PUBLIC:
Delete all of the objects and all of their privately owned parts in the database.
Delete operations are delayed in a unit of work until commit.
// This must be overridden to avoid dispatching to the commit manager.
for (Enumeration objectsEnum = domainObjects.elements(); objectsEnum.hasMoreElements();) {
deleteObject(objectsEnum.nextElement());
}
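A sketch; the DELETE statements are only issued when the unit of work commits (clone1 and clone2 are hypothetical clones registered in this uow):
Vector obsoleteObjects = new Vector();
obsoleteObjects.addElement(clone1);
obsoleteObjects.addElement(clone2);
uow.deleteAllObjects(obsoleteObjects);
uow.commit(); // deletes are executed here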
|
protected void | discoverAllUnregisteredNewObjects()INTERNAL:
Search for any objects in the parent that have not been registered.
These are required so that the nested unit of work does not add them to the parent
clone mapping on commit, causing possible incorrect insertions if they are dereferenced.
// 2612538 - the default size of IdentityHashtable (32) is appropriate
IdentityHashtable visitedNodes = new IdentityHashtable();
IdentityHashtable newObjects = new IdentityHashtable();
IdentityHashtable existingObjects = new IdentityHashtable();
// Iterate over each clone.
for (Enumeration clonesEnum = getCloneMapping().keys(); clonesEnum.hasMoreElements();) {
Object clone = clonesEnum.nextElement();
discoverUnregisteredNewObjects(clone, newObjects, existingObjects, visitedNodes);
}
setUnregisteredNewObjects(newObjects);
setUnregisteredExistingObjects(existingObjects);
|
public void | discoverUnregisteredNewObjects(java.lang.Object clone, oracle.toplink.essentials.internal.helper.IdentityHashtable knownNewObjects, oracle.toplink.essentials.internal.helper.IdentityHashtable unregisteredExistingObjects, oracle.toplink.essentials.internal.helper.IdentityHashtable visitedObjects)INTERNAL:
Traverse the object to find references to objects not registered in this unit of work.
// This defines an inner class to process the iteration operation; don't be scared, it's just an inner class.
DescriptorIterator iterator = new DescriptorIterator() {
public void iterate(Object object) {
// If the object is read-only then do not continue the traversal.
if (isClassReadOnly(object.getClass(), this.getCurrentDescriptor())) {
this.setShouldBreak(true);
return;
}
/* CR3440: Steven Vo
* Include the case that object is original then do nothing
*/
if (isSmartMerge() && isOriginalNewObject(object)) {
return;
} else if (!isObjectRegistered(object)) {// Don't need to check for aggregates, as iterator does not iterate on them by default.
if ((shouldPerformNoValidation()) && (checkForUnregisteredExistingObject(object))) {
// If no validation is performed and the object exists we need
// to keep a record of this object to ignore it; we also need to
// stop iterating over it.
((IdentityHashtable)getUnregisteredExistingObjects()).put(object, object);
this.setShouldBreak(true);
return;
}
// This means it is an unregistered new object
((IdentityHashtable)getResult()).put(object, object);
}
}
};
//set the collection in the UnitofWork to be this list
setUnregisteredExistingObjects(unregisteredExistingObjects);
iterator.setVisitedObjects(visitedObjects);
iterator.setResult(knownNewObjects);
iterator.setSession(this);
// When using wrapper policy in EJB the iteration should stop on beans,
// because EJB forces beans to be registered anyway and clone identity can be violated,
// and the violated clones' references to session objects should not be traversed.
iterator.setShouldIterateOverWrappedObjects(false);
iterator.startIterationOn(clone);
|
public void | dontPerformValidation()ADVANCED:
The unit of work performs validations such as
ensuring multiple copies of the same object don't exist in the same unit of work,
ensuring deleted objects are not referenced after commit,
and ensuring that objects from the parent cache are not referenced in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes or in
advanced situations where the application requires/desires to violate clone identity in the unit of work.
It is strongly suggested that clone identity not be violated in the unit of work.
setValidationLevel(None);
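A sketch; validation is normally left on and disabled only in advanced scenarios that knowingly violate clone identity:
UnitOfWork uow = session.acquireUnitOfWork();
uow.dontPerformValidation(); // disables clone-identity and deleted/parent-object checks
// ... register and modify objects as usual ...
uow.commit();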
|
public java.lang.Object | executeCall(oracle.toplink.essentials.queryframework.Call call, oracle.toplink.essentials.internal.sessions.AbstractRecord translationRow, oracle.toplink.essentials.queryframework.DatabaseQuery query)INTERNAL:
Override from Session. Get the accessor based on the query and execute the call;
this is here for session broker support.
Accessor accessor;
if (query.getSessionName() == null) {
accessor = query.getSession().getAccessor(query.getReferenceClass());
} else {
accessor = query.getSession().getAccessor(query.getSessionName());
}
query.setAccessor(accessor);
try {
return query.getAccessor().executeCall(call, translationRow, this);
} finally {
if (call.isFinished()) {
query.setAccessor(null);
}
}
|
public void | forceUpdateToVersionField(java.lang.Object lockObject, boolean shouldModifyVersionField)ADVANCED:
Set an optimistic read lock on the object. This feature is overridden by the normal optimistic lock
when the object is changed in the UnitOfWork. The lock object must be a clone from this
UnitOfWork and it must use version locking or timestamp locking.
The SQL would look like the following.
If shouldModifyVersionField is true,
"UPDATE EMPLOYEE SET VERSION = 2 WHERE EMP_ID = 9 AND VERSION = 1"
If shouldModifyVersionField is false,
"UPDATE EMPLOYEE SET VERSION = 1 WHERE EMP_ID = 9 AND VERSION = 1"
ClassDescriptor descriptor = getDescriptor(lockObject);
if (descriptor == null) {
throw DescriptorException.missingDescriptor(lockObject.getClass().toString());
}
getOptimisticReadLockObjects().put(descriptor.getObjectBuilder().unwrapObject(lockObject, this), new Boolean(shouldModifyVersionField));
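A sketch of taking an optimistic read lock on an object that is read but not changed, assuming a version-locked, hypothetical Employee clone from this unit of work:
Employee manager = (Employee)uow.registerObject(existingManager);
// verify (and here also increment) the VERSION column at commit even though manager is unchanged
uow.forceUpdateToVersionField(manager, true);
uow.commit(); // fails with an optimistic lock error if another transaction modified manager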
|
public oracle.toplink.essentials.internal.databaseaccess.Accessor | getAccessor()INTERNAL:
The uow does not store a local accessor but shares its parent's.
return getParent().getAccessor();
|
public oracle.toplink.essentials.internal.databaseaccess.Accessor | getAccessor(java.lang.Class domainClass)INTERNAL:
The uow does not store a local accessor but shares its parent's.
return getParent().getAccessor(domainClass);
|
public oracle.toplink.essentials.internal.databaseaccess.Accessor | getAccessor(java.lang.String sessionName)INTERNAL:
The uow does not store a local accessor but shares its parent's.
return getParent().getAccessor(sessionName);
|
public oracle.toplink.essentials.sessions.UnitOfWork | getActiveUnitOfWork()PUBLIC:
Return the active unit of work for the current active external (JTS) transaction.
This should only be used with JTS and will return null if no external transaction exists.
/* Steven Vo: CR# 2517
This fixed the problem of returning null when this method is called on a UOW.
UOW does not copy the parent session's external transaction controller
when it is acquired but session does */
return getParent().getActiveUnitOfWork();
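A JTA usage sketch; within the same active external transaction every call returns the same unit of work, and null is returned when no external transaction is active (hypothetical Employee class):
UnitOfWork uow = session.getActiveUnitOfWork();
if (uow != null) {
    Employee workingCopy = (Employee)uow.registerObject(employee);
    workingCopy.setSalary(60000); // committed when the JTA transaction commits
}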
|
protected oracle.toplink.essentials.internal.helper.IdentityHashtable | getAllClones()INTERNAL:
This method is used to get the collection of all clones in the UnitOfWork.
return this.allClones;
|
public java.util.Vector | getAllFromNewObjects(oracle.toplink.essentials.expressions.Expression selectionCriteria, java.lang.Class theClass, oracle.toplink.essentials.internal.sessions.AbstractRecord translationRow, oracle.toplink.essentials.queryframework.InMemoryQueryIndirectionPolicy valueHolderPolicy)INTERNAL:
Return any new objects matching the expression.
Used for in-memory querying.
// If new objects are in the cache then they will have already been queried.
if (shouldNewObjectsBeCached()) {
return new Vector(1);
}
// PERF: Avoid initialization of new objects if none.
if (!hasNewObjects()) {
return new Vector(1);
}
Vector objects = new Vector();
for (Enumeration newObjectsEnum = getNewObjectsOriginalToClone().elements();
newObjectsEnum.hasMoreElements();) {
Object object = newObjectsEnum.nextElement();
if (theClass.isInstance(object)) {
if (selectionCriteria == null) {
objects.addElement(object);
} else if (selectionCriteria.doesConform(object, this, translationRow, valueHolderPolicy)) {
objects.addElement(object);
}
}
}
return objects;
|
public java.lang.Object | getBackupClone(java.lang.Object clone)INTERNAL:
Return the backup clone for the working clone.
Object backupClone = getCloneMapping().get(clone);
if (backupClone != null) {
return backupClone;
}
/* CR3440: Steven Vo
* Smart merge if necessary in isObjectRegistered()
*/
if (isObjectRegistered(clone)) {
return getCloneMapping().get(clone);
} else {
ClassDescriptor descriptor = getDescriptor(clone);
Vector primaryKey = keyFromObject(clone, descriptor);
// This happens if clone was from the parent identity map.
if (getParent().getIdentityMapAccessorInstance().containsObjectInIdentityMap(primaryKey, clone.getClass(), descriptor)) {
//cr 3796
if ((getUnregisteredNewObjects().get(clone) != null) && isMergePending()) {
//Another thread has read the new object before it has had a chance to
//merge this object.
// It also means it is an unregistered new object, so create a new backup clone for it.
return descriptor.getObjectBuilder().buildNewInstance();
}
if (hasObjectsDeletedDuringCommit() && getObjectsDeletedDuringCommit().containsKey(clone)) {
throw QueryException.backupCloneIsDeleted(clone);
}
throw QueryException.backupCloneIsOriginalFromParent(clone);
}
// Also check that the object is not the original to a registered new object
// (the original should not be referenced if not smart merge; this is an error).
else if (hasNewObjects() && getNewObjectsOriginalToClone().containsKey(clone)) {
/* CR3440: Steven Vo
* Check case that clone is original
*/
if (isSmartMerge()) {
backupClone = getCloneMapping().get(getNewObjectsOriginalToClone().get(clone));
} else {
throw QueryException.backupCloneIsOriginalFromSelf(clone);
}
} else {
// This means it is an unregistered new object, so create a new backup clone for it.
backupClone = descriptor.getObjectBuilder().buildNewInstance();
}
}
return backupClone;
|
public java.lang.Object | getBackupCloneForCommit(java.lang.Object clone)INTERNAL:
Return the backup clone for the working clone.
Object backupClone = getBackupClone(clone);
/* CR3440: Steven Vo
* Build new instance only if it was not handled by getBackupClone()
*/
if (isCloneNewObject(clone)) {
return getDescriptor(clone).getObjectBuilder().buildNewInstance();
}
return backupClone;
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getCloneMapping()INTERNAL:
Return the clone mapping.
The clone mapping contains clones of all registered objects;
this is required to store the original state of the objects when registered
so that only what is changed will be committed to the database and the parent
(this is required to support parallel units of work).
// PERF: lazy-init (3286089)
if (cloneMapping == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
cloneMapping = new IdentityHashtable();
}
return cloneMapping;
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getCloneToOriginals()INTERNAL:
Hashtable used to avoid garbage collection in weak caches.
Also used as a lookup for originals during merge when the original in the
identity map cannot be found, as in a CacheIdentityMap.
//Helper.toDo("proper fix, collection merge can have objects disapear for original.");
if (cloneToOriginals == null) {// Must lazy initialize for remote.
// 2612538 - the default size of IdentityHashtable (32) is appropriate
cloneToOriginals = new IdentityHashtable();
}
return cloneToOriginals;
|
public oracle.toplink.essentials.internal.sessions.CommitManager | getCommitManager()INTERNAL:
The commit manager is used to resolve referential integrity on commits of multiple objects.
The commit manager is lazily initialized from the parent.
// PERF: lazy init, not always required for release/commit with no changes.
if (commitManager == null) {
commitManager = new CommitManager(this);
// Initialize the commit manager
commitManager.setCommitOrder(getParent().getCommitManager().getCommitOrder());
}
return commitManager;
|
public oracle.toplink.essentials.changesets.UnitOfWorkChangeSet | getCurrentChanges()ADVANCED:
This method will calculate the changes for the UnitOfWork without assigning sequence numbers.
This is a computationally intensive operation and should be avoided unless necessary.
A valid change set, with sequence numbers, can be collected from the UnitOfWork after the commit
is complete by calling unitOfWork.getUnitOfWorkChangeSet().
IdentityHashtable allObjects = null;
allObjects = collectAndPrepareObjectsForNestedMerge();
return calculateChanges(allObjects, new UnitOfWorkChangeSet());
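A sketch of inspecting pending changes before commit; the returned change set has no sequence numbers assigned:
oracle.toplink.essentials.changesets.UnitOfWorkChangeSet pendingChanges = uow.getCurrentChanges();
// examine the returned change set to decide whether committing is worthwhile; this does not alter the uow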
|
public java.util.Vector | getDefaultReadOnlyClasses()INTERNAL: Returns the set of read-only classes that gets assigned to each newly created UnitOfWork.
return getParent().getDefaultReadOnlyClasses();
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getDeletedObjects()INTERNAL:
The deleted objects collection stores any objects removed during the unit of work.
On commit they will all be removed from the database.
if (deletedObjects == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
deletedObjects = new IdentityHashtable();
}
return deletedObjects;
|
public oracle.toplink.essentials.descriptors.ClassDescriptor | getDescriptorForAlias(java.lang.String alias)PUBLIC:
Return the descriptor for the alias.
UnitOfWork delegates this to the parent.
Introduced because of Bug#2610803
return getParent().getDescriptorForAlias(alias);
|
public java.util.Map | getDescriptors()PUBLIC:
Return all registered descriptors.
The unit of work inherits its parent's descriptors. Each descriptor's Java Class
is used as the key in the returned Map.
return getParent().getDescriptors();
|
public oracle.toplink.essentials.internal.sessions.AbstractSession | getExecutionSession(oracle.toplink.essentials.queryframework.DatabaseQuery query)INTERNAL:
Gets the session which this query will be executed on.
Generally will be called immediately before the call is translated,
which is immediately before session.executeCall.
Since the execution session also knows the correct datasource platform
to execute on, it is often used in the mappings where the platform is
needed for type conversion, or where calls are translated.
Is also the session with the accessor. Will return a ClientSession if
it is in transaction and has a write connection.
// This optimization is only for when executing with a ClientSession in
// transaction. In that case log with the UnitOfWork instead of the
// ClientSession.
// Note that if actually executing on ServerSession or a registered
// session of a broker, must execute on that session directly.
//bug 5201121 Always use the parent or execution session from the parent;
// the unit of work should never be used as it does not control the
//accessors, and with a session broker it will not have the correct
//login info
return getParent().getExecutionSession(query);
|
public int | getLifecycle()INTERNAL:
The life cycle tracks if the unit of work is active and is used for JTS.
return lifecycle;
|
public oracle.toplink.essentials.internal.sessions.MergeManager | getMergeManager()A reference to the last used merge manager. This is used to track locked
objects.
return this.lastUsedMergeManager;
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getNewAggregates()INTERNAL:
The hashtable stores any new aggregates that have been cloned.
if (this.newAggregates == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
this.newAggregates = new IdentityHashtable();
}
return newAggregates;
|
public synchronized oracle.toplink.essentials.internal.helper.IdentityHashtable | getNewObjectsCloneToOriginal()INTERNAL:
The new objects stores any objects newly created during the unit of work.
On commit they will all be inserted into the database.
if (newObjectsCloneToOriginal == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
newObjectsCloneToOriginal = new IdentityHashtable();
}
return newObjectsCloneToOriginal;
|
public synchronized oracle.toplink.essentials.internal.helper.IdentityHashtable | getNewObjectsOriginalToClone()INTERNAL:
The new objects stores any objects newly created during the unit of work.
On commit they will all be inserted into the database.
if (newObjectsOriginalToClone == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
newObjectsOriginalToClone = new IdentityHashtable();
}
return newObjectsOriginalToClone;
|
public java.lang.Object | getObjectFromNewObjects(java.lang.Class theClass, java.util.Vector selectionKey)INTERNAL:
Return any new object matching the expression.
Used for in-memory querying.
// PERF: Avoid initialization of new objects if none.
if (!hasNewObjects()) {
return null;
}
ObjectBuilder objectBuilder = getDescriptor(theClass).getObjectBuilder();
for (Enumeration newObjectsEnum = getNewObjectsOriginalToClone().elements();
newObjectsEnum.hasMoreElements();) {
Object object = newObjectsEnum.nextElement();
if (theClass.isInstance(object)) {
// removed dead null check as this method is never called if selectionKey == null
Vector primaryKey = objectBuilder.extractPrimaryKeyFromObject(object, this);
if (new CacheKey(primaryKey).equals(new CacheKey(selectionKey))) {
return object;
}
}
}
return null;
|
public java.lang.Object | getObjectFromNewObjects(oracle.toplink.essentials.expressions.Expression selectionCriteria, java.lang.Class theClass, oracle.toplink.essentials.internal.sessions.AbstractRecord translationRow, oracle.toplink.essentials.queryframework.InMemoryQueryIndirectionPolicy valueHolderPolicy)INTERNAL:
Return any new object matching the expression.
Used for in-memory querying.
// PERF: Avoid initialization of new objects if none.
if (!hasNewObjects()) {
return null;
}
for (Enumeration newObjectsEnum = getNewObjectsOriginalToClone().elements();
newObjectsEnum.hasMoreElements();) {
Object object = newObjectsEnum.nextElement();
if (theClass.isInstance(object)) {
if (selectionCriteria == null) {
return object;
}
if (selectionCriteria.doesConform(object, this, translationRow, valueHolderPolicy)) {
return object;
}
}
}
return null;
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getObjectsDeletedDuringCommit()INTERNAL:
Returns all the objects which are deleted during root commit of unit of work.
// PERF: lazy-init (3286089)
if (objectsDeletedDuringCommit == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
objectsDeletedDuringCommit = new IdentityHashtable();
}
return objectsDeletedDuringCommit;
|
public java.util.Hashtable | getOptimisticReadLockObjects()INTERNAL:
Return optimistic read lock objects
if (optimisticReadLockObjects == null) {
optimisticReadLockObjects = new Hashtable(2);
}
return optimisticReadLockObjects;
|
public java.lang.Object | getOriginalVersionOfNewObject(java.lang.Object workingClone)INTERNAL:
Return the original version of the new object (working clone).
// PERF: Avoid initialization of new objects if none.
if (!hasNewObjects()) {
return null;
}
return getNewObjectsCloneToOriginal().get(workingClone);
|
public java.lang.Object | getOriginalVersionOfObject(java.lang.Object workingClone)ADVANCED:
Return the original version of the object(clone) from the parent's identity map.
// Can be null when called from the mappings.
if (workingClone == null) {
return null;
}
ClassDescriptor descriptor = getDescriptor(workingClone);
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(workingClone, this);
Vector primaryKey = builder.extractPrimaryKeyFromObject(implementation, this);
Object original = getParent().getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, implementation.getClass(), descriptor, null);
if (original == null) {
// Check if it is a registered new object.
original = getOriginalVersionOfNewObject(implementation);
}
if (original == null) {
// For bug 3013948 looking in the cloneToOriginals mapping will not help
// if the object was never registered.
if (isClassReadOnly(implementation.getClass(), descriptor)) {
return implementation;
}
// The object could have been removed from the cache even though it was in the unit of work.
// fix for 2.5.1.3 PWK (1360)
if (hasCloneToOriginals()) {
original = getCloneToOriginals().get(workingClone);
}
}
if (original == null) {
// This means that it must be an unregistered new object, so register a new clone as its original.
original = buildOriginal(implementation);
}
return original;
|
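A minimal usage sketch for getOriginalVersionOfObject: the Session parameter, the entity instance, and the assumption that this ADVANCED method is reachable through the UnitOfWork handle returned by acquireUnitOfWork() are all illustrative, not guarantees of this API.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class OriginalVersionExample {
    public static void compareCloneToOriginal(Session session, Object someEntity) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Register to obtain the working copy (clone) managed by this unit of work.
        Object workingCopy = uow.registerObject(someEntity);
        // The original backing the clone lives in the parent session's identity map.
        Object original = uow.getOriginalVersionOfObject(workingCopy);
        // The working copy and the original are distinct instances of the same row.
        System.out.println("same instance? " + (workingCopy == original));
        uow.release();
    }
}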
public java.lang.Object | getOriginalVersionOfObjectOrNull(java.lang.Object workingClone)ADVANCED:
Return the original version of the object(clone) from the parent's identity map.
// Can be null when called from the mappings.
if (workingClone == null) {
return null;
}
ClassDescriptor descriptor = getDescriptor(workingClone);
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(workingClone, this);
Vector primaryKey = builder.extractPrimaryKeyFromObject(implementation, this);
Object original = getParent().getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, implementation.getClass(), descriptor, null);
if (original == null) {
// Check if it is a registered new object.
original = getOriginalVersionOfNewObject(implementation);
}
if (original == null) {
// For bug 3013948 looking in the cloneToOriginals mapping will not help
// if the object was never registered.
if (isClassReadOnly(implementation.getClass(), descriptor)) {
return implementation;
}
// The object could have been removed from the cache even though it was in the unit of work.
// fix for 2.5.1.3 PWK (1360)
if (hasCloneToOriginals()) {
original = getCloneToOriginals().get(workingClone);
}
}
return original;
|
public oracle.toplink.essentials.internal.sessions.AbstractSession | getParent()PUBLIC:
Return the parent.
This is a unit of work if nested, otherwise a database session or client session.
return parent;
|
public oracle.toplink.essentials.internal.sessions.AbstractSession | getParentIdentityMapSession(oracle.toplink.essentials.queryframework.DatabaseQuery query, boolean canReturnSelf, boolean terminalOnly)INTERNAL:
Gets the next link in the chain of sessions followed by a query's check-early-return:
the chain of sessions with identity maps, all the way up to the root session.
Used for a session broker, which delegates to registered sessions, or a UnitOfWork,
which also checks the parent identity map.
if (canReturnSelf && !terminalOnly) {
return this;
} else {
return getParent().getParentIdentityMapSession(query, true, terminalOnly);
}
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getPessimisticLockedObjects()INTERNAL:
if (pessimisticLockedObjects == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
pessimisticLockedObjects = new IdentityHashtable();
}
return pessimisticLockedObjects;
|
public oracle.toplink.essentials.internal.databaseaccess.Platform | getPlatform(java.lang.Class domainClass)INTERNAL:
Return the platform for a particular class.
return getParent().getPlatform(domainClass);
|
public oracle.toplink.essentials.queryframework.DatabaseQuery | getQuery(java.lang.String name, java.util.Vector arguments)PUBLIC:
Return the query from the session pre-defined queries with the given name.
This allows for common queries to be pre-defined, reused and executed by name.
DatabaseQuery query = super.getQuery(name, arguments);
if (query == null) {
query = getParent().getQuery(name, arguments);
}
return query;
|
public oracle.toplink.essentials.queryframework.DatabaseQuery | getQuery(java.lang.String name)PUBLIC:
Return the query from the session pre-defined queries with the given name.
This allows for common queries to be pre-defined, reused and executed by name.
DatabaseQuery query = super.getQuery(name);
if (query == null) {
query = getParent().getQuery(name);
}
return query;
|
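A short sketch of looking up a pre-defined query by name through the unit of work, which falls back to the parent session; the query name "findAllEmployees", its prior registration on the parent session, and the Session parameter are assumptions made only for illustration.
import oracle.toplink.essentials.queryframework.DatabaseQuery;
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class NamedQueryLookupExample {
    public static DatabaseQuery lookupNamedQuery(Session session) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Falls back to the parent session if the unit of work itself has no
        // query registered under this name.
        DatabaseQuery query = uow.getQuery("findAllEmployees");
        uow.release();
        return query;
    }
}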
public java.util.Hashtable | getReadOnlyClasses()INTERNAL:
Returns the set of read-only classes for the receiver.
Use this method with setReadOnlyClasses() to modify a UnitOfWork's set of read-only
classes before using the UnitOfWork.
return readOnlyClasses;
|
protected oracle.toplink.essentials.internal.helper.IdentityHashtable | getRemovedObjects()INTERNAL:
The removed objects stores any newly registered objects removed during the nested unit of work.
On commit they will all be removed from the parent unit of work.
// PERF: lazy-init (3286089)
if (removedObjects == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
removedObjects = new IdentityHashtable();
}
return removedObjects;
|
public oracle.toplink.essentials.internal.sequencing.Sequencing | getSequencing()INTERNAL:
Return the Sequencing object used by the session.
return getParent().getSequencing();
|
public oracle.toplink.essentials.platform.server.ServerPlatform | getServerPlatform()INTERNAL:
Marked internal as this is not customer API but a helper method for
accessing the server platform from within TopLink's other session types
(i.e. not DatabaseSession).
return getParent().getServerPlatform();
|
public java.lang.String | getSessionTypeString()INTERNAL:
Returns the type of session, its class.
Override to hide from the user when they are using an internal subclass
of a known class.
A user does not need to know that their UnitOfWork is a
non-deferred UnitOfWork, or that their ClientSession is an
IsolatedClientSession.
return "UnitOfWork";
|
public int | getShouldThrowConformExceptions()INTERNAL:
Return whether to throw exceptions on conforming queries
return shouldThrowConformExceptions;
|
public int | getState()INTERNAL:
Return the lifecycle state this UnitOfWork is in.
return lifecycle;
|
public java.lang.Object | getTransaction()INTERNAL:
PERF: Return the associated external transaction.
Used to optimize activeUnitOfWork lookup.
return transaction;
|
public oracle.toplink.essentials.changesets.UnitOfWorkChangeSet | getUnitOfWorkChangeSet()ADVANCED:
Returns the currentChangeSet from the UnitOfWork.
This is only valid after the UnitOfWork has committed successfully.
return unitOfWorkChangeSet;
|
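A small sketch of retrieving the change set after a successful commit, per the note above; the Session parameter, the entity instance, and the availability of this ADVANCED method on the UnitOfWork handle are assumptions for illustration.
import oracle.toplink.essentials.changesets.UnitOfWorkChangeSet;
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class ChangeSetAfterCommitExample {
    public static void commitAndInspect(Session session, Object entity) {
        UnitOfWork uow = session.acquireUnitOfWork();
        Object workingCopy = uow.registerObject(entity);
        // ... mutate workingCopy through its setters ...
        uow.commit();
        // Only meaningful after the commit has completed successfully.
        UnitOfWorkChangeSet changes = uow.getUnitOfWorkChangeSet();
        System.out.println("change set captured: " + (changes != null));
    }
}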
public oracle.toplink.essentials.internal.helper.IdentityHashtable | getUnregisteredExistingObjects()INTERNAL:
Used to lazily initialize the unregistered existing objects collection.
if (this.unregisteredExistingObjects == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
this.unregisteredExistingObjects = new IdentityHashtable();
}
return unregisteredExistingObjects;
|
protected oracle.toplink.essentials.internal.helper.IdentityHashtable | getUnregisteredNewObjects()INTERNAL:
This is used to store unregistered objects discovered in the parent so that the child
unit of work knows not to register them on commit.
if (unregisteredNewObjects == null) {
// 2612538 - the default size of IdentityHashtable (32) is appropriate
unregisteredNewObjects = new IdentityHashtable();
}
return unregisteredNewObjects;
|
public int | getValidationLevel()ADVANCED:
The unit of work performs validations such as ensuring that multiple copies of the same
object do not exist in the same unit of work, ensuring that deleted objects are not
referred to after commit, and ensuring that objects from the parent cache are not
referred to in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes, or in
advanced situations where the application requires or desires to violate clone identity
in the unit of work. It is strongly suggested that clone identity not be violated in the
unit of work.
return validationLevel;
|
public java.lang.Object | getWorkingCopyFromUnitOfWorkIdentityMap(java.lang.Object object, java.util.Vector primaryKey)INTERNAL:
Return the registered working copy from the unit of work identity map.
If not registered in the unit of work yet, return null
//return the descriptor of the passed object
ClassDescriptor descriptor = getDescriptor(object);
if (descriptor == null) {
throw DescriptorException.missingDescriptor(object.getClass().toString());
}
//an aggregate object cannot be registered directly; it must be registered through the parent owning object.
if (descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
throw ValidationException.cannotRegisterAggregateObjectInUnitOfWork(object.getClass());
}
// Check if the working copy is again being registered in which case we return the same working copy
Object registeredObject = getCloneMapping().get(object);
if (registeredObject != null) {
return object;
}
//check the unit of work cache first to see if already registered.
Object objectFromUOWCache = getIdentityMapAccessorInstance().getIdentityMapManager().getFromIdentityMap(primaryKey, object.getClass(), descriptor);
if (objectFromUOWCache != null) {
// Has already been cloned, return the working clone from the IM rather than the passed object.
return objectFromUOWCache;
}
//not found, return null
return null;
|
public boolean | hasChanges()ADVANCED:
The unit of work is capable of preprocessing to determine if any of the clones have been changed.
This is computationally expensive and should be avoided on large object graphs.
if (hasNewObjects()) {
return true;
}
IdentityHashtable allObjects = collectAndPrepareObjectsForNestedMerge();
//Using the nested merge prevents the UnitOfWork from assigning sequence numbers
if (!getUnregisteredNewObjects().isEmpty()) {
return true;
}
if (hasDeletedObjects()) {
return true;
}
UnitOfWorkChangeSet changeSet = calculateChanges(allObjects, new UnitOfWorkChangeSet());
return changeSet.hasChanges();
|
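A brief sketch showing the preprocessing check described above, used to skip an unnecessary commit; the Session and entity parameters are illustrative assumptions.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class HasChangesExample {
    public static void commitOnlyIfDirty(Session session, Object entity) {
        UnitOfWork uow = session.acquireUnitOfWork();
        Object workingCopy = uow.registerObject(entity);
        // ... conditionally mutate workingCopy through its setters ...
        // hasChanges() walks the registered clones, so reserve it for small graphs.
        if (uow.hasChanges()) {
            uow.commit();
        } else {
            uow.release();
        }
    }
}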
protected boolean | hasCloneMapping()
return ((cloneMapping != null) && !cloneMapping.isEmpty());
|
protected boolean | hasCloneToOriginals()
return ((cloneToOriginals != null) && !cloneToOriginals.isEmpty());
|
protected boolean | hasDeferredModifyAllQueries()
return ((deferredModifyAllQueries != null) && !deferredModifyAllQueries.isEmpty());
|
protected boolean | hasDeletedObjects()
return ((deletedObjects != null) && !deletedObjects.isEmpty());
|
protected boolean | hasModifications()INTERNAL:
Does this unit of work have any changes, or anything else that requires a write
to the database and a transaction to be started?
Should be called after changes are calculated internally by commit.
Note that if a transaction was begun prematurely, it still needs to be committed.
if (getUnitOfWorkChangeSet().hasChanges() || hasDeletedObjects() || hasModifyAllQueries() || hasDeferredModifyAllQueries() || ((oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet)getUnitOfWorkChangeSet()).hasForcedChanges()) {
return true;
} else {
return false;
}
|
protected boolean | hasModifyAllQueries()
return ((modifyAllQueries != null) && !modifyAllQueries.isEmpty());
|
public boolean | hasNewObjects()INTERNAL:
Return if there are any registered new objects.
This is used for both newObjectsOriginalToClone and newObjectsCloneToOriginal as they are always in synch.
PERF: Used to avoid initialization of new objects hashtable unless required.
return ((newObjectsOriginalToClone != null) && !newObjectsOriginalToClone.isEmpty());
|
protected boolean | hasObjectsDeletedDuringCommit()
return ((objectsDeletedDuringCommit != null) && !objectsDeletedDuringCommit.isEmpty());
|
protected boolean | hasRemovedObjects()
return ((removedObjects != null) && !removedObjects.isEmpty());
|
public void | initializeIdentityMapAccessor()INTERNAL:
Set up the IdentityMapManager. This method allows subclasses of Session to override
the default IdentityMapManager functionality.
this.identityMapAccessor = new UnitOfWorkIdentityMapAccessor(this, new IdentityMapManager(this));
|
public java.lang.Object | internalExecuteQuery(oracle.toplink.essentials.queryframework.DatabaseQuery query, oracle.toplink.essentials.internal.sessions.AbstractRecord databaseRow)INTERNAL:
Return the results from executing the database query.
The arguments should be a database row with raw data values.
if (!isActive()) {
throw QueryException.querySentToInactiveUnitOfWork(query);
}
return query.executeInUnitOfWork(this, databaseRow);
|
public java.lang.Object | internalRegisterObject(java.lang.Object object, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Register the object with the unit of work.
This does not perform wrapping or unwrapping.
This is used for internal registration in the merge manager.
if (object == null) {
return null;
}
if (descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
throw ValidationException.cannotRegisterAggregateObjectInUnitOfWork(object.getClass());
}
Object registeredObject = checkIfAlreadyRegistered(object, descriptor);
if (registeredObject == null) {
registeredObject = checkExistence(object);
if (registeredObject == null) {
// The object is not in the parent identity map, so it was created under this unit of work
// and must therefore be new.
registeredObject = cloneAndRegisterNewObject(object);
}
}
return registeredObject;
|
public boolean | isActive()PUBLIC:
Return if the unit of work is active (i.e. has not been released).
return !isDead();
|
public boolean | isAfterWriteChangesButBeforeCommit()INTERNAL:
Has writeChanges() been attempted on this UnitOfWork? It may have
either succeeded or failed, but either way the UnitOfWork is in a highly
restricted state.
return ((getLifecycle() == CommitTransactionPending) || (getLifecycle() == WriteChangesFailed));
|
protected boolean | isAfterWriteChangesFailed()INTERNAL:
Once writeChanges has failed, all a user can really do is roll back.
return getLifecycle() == WriteChangesFailed;
|
public boolean | isClassReadOnly(java.lang.Class theClass, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)PUBLIC:
Checks to see if the specified class or descriptor is read-only or not in this UnitOfWork.
if ((descriptor != null) && (descriptor.shouldBeReadOnly())) {
return true;
}
if ((theClass != null) && getReadOnlyClasses().containsKey(theClass)) {
return true;
}
return false;
|
public boolean | isCloneNewObject(java.lang.Object clone)INTERNAL:
Check if the object is already registered.
// PERF: Avoid initialization of new objects if none.
if (!hasNewObjects()) {
return false;
}
return getNewObjectsCloneToOriginal().containsKey(clone);
|
public boolean | isCommitPending()INTERNAL:
Return if the unit of work is waiting to be committed or in the process of being committed.
return getLifecycle() == CommitPending;
|
public boolean | isDead()INTERNAL:
Return if the unit of work is dead.
return getLifecycle() == Death;
|
public boolean | isInTransaction()PUBLIC:
Return whether the session currently has a database transaction in progress.
return getParent().isInTransaction();
|
public boolean | isMergePending()INTERNAL:
Return if the unit of work is waiting to be merged or in the process of being merged.
return getLifecycle() == MergePending;
|
public boolean | isNestedUnitOfWork()PUBLIC:
Return whether this session is a nested unit of work or not.
return false;
|
public boolean | isObjectDeleted(java.lang.Object object)INTERNAL:
Return if the object has been deleted in this unit of work.
boolean isDeleted = false;
if (hasDeletedObjects()) {
isDeleted = getDeletedObjects().containsKey(object);
}
if (getParent().isUnitOfWork()) {
return isDeleted || ((UnitOfWorkImpl)getParent()).isObjectDeleted(object);
} else {
return isDeleted;
}
|
public boolean | isObjectNew(java.lang.Object clone)INTERNAL:
This method is used to determine if the clone is a new Object in the UnitOfWork
//CR3678 - ported from 4.0
return (isCloneNewObject(clone) || (!isObjectRegistered(clone) && !getReadOnlyClasses().contains(clone.getClass()) && !getUnregisteredExistingObjects().contains(clone)));
|
public boolean | isObjectRegistered(java.lang.Object clone)INTERNAL:
Return whether the clone object is already registered.
if (getCloneMapping().containsKey(clone)) {
return true;
}
// We do smart merge here
if (isSmartMerge()){
ClassDescriptor descriptor = getDescriptor(clone);
if (getParent().getIdentityMapAccessorInstance().containsObjectInIdentityMap(keyFromObject(clone, descriptor), clone.getClass(), descriptor) ) {
mergeCloneWithReferences(clone);
// don't put clone in clone mapping since it would result in duplicate clone
return true;
}
}
return false;
|
public boolean | isOriginalNewObject(java.lang.Object original)INTERNAL:
Return whether the original object is new.
It was either registered as new or discovered as a new aggregate
within another new object.
return (hasNewObjects() && getNewObjectsOriginalToClone().containsKey(original)) || getNewAggregates().containsKey(original);
|
public boolean | isPessimisticLocked(java.lang.Object clone)INTERNAL:
return getPessimisticLockedObjects().containsKey(clone);
|
public static boolean | isSmartMerge()INTERNAL:
Return the status of smart merge
return SmartMerge;
|
public boolean | isSynchronized()INTERNAL:
Return if this session is a synchronized unit of work.
return isSynchronized;
|
public boolean | isUnitOfWork()PUBLIC:
Return if this session is a unit of work.
return true;
|
protected void | issueModifyAllQueryList()INTERNAL:
Will notify all the deferred ModifyAllQueries (excluding UpdateAllQueries) and deferred UpdateAllQueries to execute.
if (deferredModifyAllQueries != null) {
for (int i = 0; i < deferredModifyAllQueries.size(); i++) {
Object[] queries = (Object[])deferredModifyAllQueries.get(i);
ModifyAllQuery query = (ModifyAllQuery)queries[0];
AbstractRecord translationRow = (AbstractRecord)queries[1];
getParent().executeQuery(query, translationRow);
}
}
|
public void | issueSQLbeforeCompletion()INTERNAL:
For synchronized units of work, dump SQL to the database.
For cases where writes occur before the end of the transaction, do not commit.
issueSQLbeforeCompletion(true);
|
public void | issueSQLbeforeCompletion(boolean commitTransaction)INTERNAL:
For synchronized units of work, dump SQL to the database.
For cases where writes occur before the end of the transaction, do not commit.
if (getLifecycle() == CommitTransactionPending) {
commitTransactionAfterWriteChanges();
return;
}
// CR#... call event and log.
log(SessionLog.FINER, SessionLog.TRANSACTION, "begin_unit_of_work_commit");
getEventManager().preCommitUnitOfWork();
setLifecycle(CommitPending);
commitToDatabaseWithChangeSet(commitTransaction);
|
private void | logDebugMessage(java.lang.Object object, java.lang.String debugMessage)log the message and debug info if option is set. (reduce the duplicate codes)
log(SessionLog.FINEST, SessionLog.TRANSACTION, debugMessage, object);
|
protected void | mergeChangesIntoParent()INTERNAL: Merge the changes to all objects to the parent.
UnitOfWorkChangeSet uowChangeSet = (UnitOfWorkChangeSet)getUnitOfWorkChangeSet();
if (uowChangeSet == null) {
// may be using the old commit process usesOldCommit()
setUnitOfWorkChangeSet(new UnitOfWorkChangeSet());
uowChangeSet = (UnitOfWorkChangeSet)getUnitOfWorkChangeSet();
calculateChanges(getAllClones(), (UnitOfWorkChangeSet)getUnitOfWorkChangeSet());
}
// 3286123 - if no work to be done, skip this part of uow.commit()
if (hasModifications()) {
setPendingMerge();
startOperationProfile(SessionProfiler.Merge);
// Ensure concurrency if cache isolation requires.
getParent().getIdentityMapAccessorInstance().acquireWriteLock();
MergeManager manager = getMergeManager();
if (manager == null){
// no MergeManager created for locks during commit
manager = new MergeManager(this);
}
try {
if (!isNestedUnitOfWork()) {
preMergeChanges();
}
// Must clone the clone mapping because entries can be added to it during the merging,
// and that can lead to concurrency problems.
getParent().getEventManager().preMergeUnitOfWorkChangeSet(uowChangeSet);
if (!isNestedUnitOfWork() && getDatasourceLogin().shouldSynchronizeObjectLevelReadWrite()) {
setMergeManager(manager);
//If we are merging into the shared cache acquire all required locks before merging.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().acquireRequiredLocks(getMergeManager(), (UnitOfWorkChangeSet)getUnitOfWorkChangeSet());
}
Enumeration changeSetLists = ((UnitOfWorkChangeSet)getUnitOfWorkChangeSet()).getObjectChanges().elements();
while (changeSetLists.hasMoreElements()) {
Hashtable objectChangesList = (Hashtable)((Hashtable)changeSetLists.nextElement()).clone();
if (objectChangesList != null) {// may be no changes for that class type.
for (Enumeration pendingEnum = objectChangesList.elements();
pendingEnum.hasMoreElements();) {
ObjectChangeSet changeSetToWrite = (ObjectChangeSet)pendingEnum.nextElement();
if (changeSetToWrite.hasChanges()) {
Object objectToWrite = changeSetToWrite.getUnitOfWorkClone();
//bug#4154455 -- only merge into the shared cache if the object is new or if it already exists in the shared cache
if (changeSetToWrite.isNew() || (getOriginalVersionOfObjectOrNull(objectToWrite) != null)) {
manager.mergeChanges(objectToWrite, changeSetToWrite);
}
} else {
// if no 'real' changes to the object change set, remove it from the
// list so it won't be unnecessarily sent via cache sync.
uowChangeSet.removeObjectChangeSet(changeSetToWrite);
}
}
}
}
// Notify the queries to merge into the shared cache
if (modifyAllQueries != null) {
for (int i = 0; i < modifyAllQueries.size(); i++) {
ModifyAllQuery query = (ModifyAllQuery)modifyAllQueries.get(i);
query.setSession(getParent());// ensure the query knows which cache to update
query.mergeChangesIntoSharedCache();
}
}
if (isNestedUnitOfWork()) {
changeSetLists = ((UnitOfWorkChangeSet)getUnitOfWorkChangeSet()).getNewObjectChangeSets().elements();
while (changeSetLists.hasMoreElements()) {
IdentityHashtable objectChangesList = (IdentityHashtable)((IdentityHashtable)changeSetLists.nextElement()).clone();
if (objectChangesList != null) {// may be no changes for that class type.
for (Enumeration pendingEnum = objectChangesList.elements();
pendingEnum.hasMoreElements();) {
ObjectChangeSet changeSetToWrite = (ObjectChangeSet)pendingEnum.nextElement();
if (changeSetToWrite.hasChanges()) {
Object objectToWrite = changeSetToWrite.getUnitOfWorkClone();
manager.mergeChanges(objectToWrite, changeSetToWrite);
} else {
// if no 'real' changes to the object change set, remove it from the
// list so it won't be unnecessarily sent via cache sync.
uowChangeSet.removeObjectChangeSet(changeSetToWrite);
}
}
}
}
}
if (!isNestedUnitOfWork()) {
//If we are merging into the shared cache release all of the locks that we acquired.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(manager);
setMergeManager(null);
postMergeChanges();
}
} finally {
if (!isNestedUnitOfWork() && !manager.getAcquiredLocks().isEmpty()) {
// if the locks have not already been released (!acquiredLocks.empty)
// then there must have been an error, release all of the locks.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(manager);
setMergeManager(null);
}
getParent().getIdentityMapAccessorInstance().releaseWriteLock();
getParent().getEventManager().postMergeUnitOfWorkChangeSet(uowChangeSet);
endOperationProfile(SessionProfiler.Merge);
}
}
|
public java.lang.Object | mergeClone(java.lang.Object rmiClone)PUBLIC:
Merge the attributes of the clone into the unit of work copy.
This can be used for objects that are returned from the client through
RMI serialization (or another serialization mechanism), because the RMI object
will be a clone this will merge its attributes correctly to preserve object
identity within the unit of work and record its changes.
The object and its private owned parts are merged.
return mergeClone(rmiClone, MergeManager.CASCADE_PRIVATE_PARTS);
|
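A minimal sketch of merging a serialized (detached) copy back into a working copy; the assumption that the detached instance arrived via RMI or a similar serialization layer, and the Session parameter, are illustrative only.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class MergeCloneExample {
    public static void applyDetachedChanges(Session session, Object detachedCopy) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // The returned object is the unit of work's working copy with the
        // detached copy's attribute values merged in, preserving identity.
        Object workingCopy = uow.mergeClone(detachedCopy);
        uow.commit();
    }
}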
public java.lang.Object | mergeClone(java.lang.Object rmiClone, int cascadeDepth)INTERNAL:
Merge the attributes of the clone into the unit of work copy.
if (rmiClone == null) {
return null;
}
//CR#2272
logDebugMessage(rmiClone, "merge_clone");
startOperationProfile(SessionProfiler.Merge);
ObjectBuilder builder = getDescriptor(rmiClone).getObjectBuilder();
Object implementation = builder.unwrapObject(rmiClone, this);
MergeManager manager = new MergeManager(this);
manager.mergeCloneIntoWorkingCopy();
manager.setCascadePolicy(cascadeDepth);
Object merged = null;
try {
merged = manager.mergeChanges(implementation, null);
} catch (RuntimeException exception) {
merged = handleException(exception);
}
endOperationProfile(SessionProfiler.Merge);
return merged;
|
public java.lang.Object | mergeCloneWithReferences(java.lang.Object rmiClone)PUBLIC:
Merge the attributes of the clone into the unit of work copy.
This can be used for objects that are returned from the client through
RMI serialization (or another serialization mechanism), because the RMI object
will be a clone this will merge its attributes correctly to preserve object
identity within the unit of work and record its changes.
The object and its private owned parts are merged. This will include references from
dependent objects to independent objects.
return this.mergeCloneWithReferences(rmiClone, MergeManager.CASCADE_PRIVATE_PARTS);
|
public java.lang.Object | mergeCloneWithReferences(java.lang.Object rmiClone, int cascadePolicy)PUBLIC:
Merge the attributes of the clone into the unit of work copy.
This can be used for objects that are returned from the client through
RMI serialization (or another serialization mechanism), because the RMI object
will be a clone this will merge its attributes correctly to preserve object
identity within the unit of work and record its changes.
The object and its private owned parts are merged. This will include references from
dependent objects to independent objects.
return mergeCloneWithReferences(rmiClone, cascadePolicy, false);
|
public java.lang.Object | mergeCloneWithReferences(java.lang.Object rmiClone, int cascadePolicy, boolean forceCascade)INTERNAL:
Merge the attributes of the clone into the unit of work copy.
This can be used for objects that are returned from the client through
RMI serialization (or another serialization mechanism), because the RMI object
will be a clone this will merge its attributes correctly to preserve object
identity within the unit of work and record its changes.
The object and its private owned parts are merged. This will include references from
dependent objects to independent objects.
if (rmiClone == null) {
return null;
}
ClassDescriptor descriptor = getDescriptor(rmiClone);
if ((descriptor == null) || descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
if (cascadePolicy == MergeManager.CASCADE_BY_MAPPING){
throw new IllegalArgumentException(ExceptionLocalization.buildMessage("not_an_entity", new Object[]{rmiClone}));
}
return rmiClone;
}
//CR#2272
logDebugMessage(rmiClone, "merge_clone_with_references");
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(rmiClone, this);
MergeManager manager = new MergeManager(this);
manager.mergeCloneWithReferencesIntoWorkingCopy();
manager.setCascadePolicy(cascadePolicy);
manager.setForceCascade(forceCascade);
Object mergedObject = manager.mergeChanges(implementation, null);
if (isSmartMerge()) {
return builder.wrapObject(mergedObject, this);
} else {
return mergedObject;
}
|
public void | mergeClonesAfterCompletion()INTERNAL:
for synchronized units of work, merge changes into parent
mergeChangesIntoParent();
// CR#... call event and log.
getEventManager().postCommitUnitOfWork();
log(SessionLog.FINER, SessionLog.TRANSACTION, "end_unit_of_work_commit");
|
public java.lang.Object | newInstance(java.lang.Class theClass)PUBLIC:
Return a new instance of the class registered in this unit of work.
This can be used to ensure that new objects are registered correctly.
//CR#2272
logDebugMessage(theClass, "new_instance");
ClassDescriptor descriptor = getDescriptor(theClass);
Object newObject = descriptor.getObjectBuilder().buildNewInstance();
return registerObject(newObject);
|
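A short sketch of creating and registering a new instance in one step, assuming a mapped entity class and a Session obtained elsewhere (illustrative assumptions only).
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class NewInstanceExample {
    public static Object createAndInsert(Session session, Class entityClass) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Builds a new instance via the class descriptor and registers it,
        // so it will be inserted on commit.
        Object registeredNew = uow.newInstance(entityClass);
        // ... populate registeredNew through its setters ...
        uow.commit();
        return registeredNew;
    }
}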
public void | performFullValidation()ADVANCED:
The unit of work performs validations such as ensuring that multiple copies of the same
object do not exist in the same unit of work, ensuring that deleted objects are not
referred to after commit, and ensuring that objects from the parent cache are not
referred to in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes, or in
advanced situations where the application requires or desires to violate clone identity
in the unit of work. It is strongly suggested that clone identity not be violated in the
unit of work.
setValidationLevel(Full);
|
public void | performPartialValidation()ADVANCED:
The unit of work performs validations such as ensuring that multiple copies of the same
object do not exist in the same unit of work, ensuring that deleted objects are not
referred to after commit, and ensuring that objects from the parent cache are not
referred to in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes, or in
advanced situations where the application requires or desires to violate clone identity
in the unit of work. It is strongly suggested that clone identity not be violated in the
unit of work.
setValidationLevel(Partial);
|
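A minimal sketch of raising the validation level while debugging clone-identity problems, assuming the Session is obtained elsewhere; this is a usage illustration, not a recommendation for production code.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class ValidationLevelExample {
    public static UnitOfWork acquireForDebugging(Session session) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Full validation adds overhead; it is normally enabled only while
        // diagnosing clone-identity or parent-cache reference problems.
        uow.performFullValidation();
        return uow;
    }
}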
public void | performRemove(java.lang.Object toBeDeleted, oracle.toplink.essentials.internal.helper.IdentityHashtable visitedObjects)INTERNAL:
This method will perform a delete operation on the provided objects, pre-determining
the objects that will be deleted by a commit of the UnitOfWork, including privately
owned objects. It does not execute a query for the deletion of these objects as the
normal deleteObject operation does. Mainly implemented to provide EJB 3.0 deleteObject
support.
try {
if (toBeDeleted == null) {
return;
}
ClassDescriptor descriptor = getDescriptor(toBeDeleted);
if ((descriptor == null) || descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
throw new IllegalArgumentException(ExceptionLocalization.buildMessage("not_an_entity", new Object[]{toBeDeleted}));
}
logDebugMessage(toBeDeleted, "deleting_object");
startOperationProfile(SessionProfiler.DeletedObject);
//bug 4568370+4599010; fix EntityManager.remove() to handle new objects
if (getDeletedObjects().contains(toBeDeleted)){
return;
}
visitedObjects.put(toBeDeleted,toBeDeleted);
Object registeredObject = checkIfAlreadyRegistered(toBeDeleted, descriptor);
if (registeredObject == null) {
Vector primaryKey = descriptor.getObjectBuilder().extractPrimaryKeyFromObject(toBeDeleted, this);
DoesExistQuery existQuery = descriptor.getQueryManager().getDoesExistQuery();
existQuery = (DoesExistQuery)existQuery.clone();
existQuery.setObject(toBeDeleted);
existQuery.setPrimaryKey(primaryKey);
existQuery.setDescriptor(descriptor);
existQuery.setCheckCacheFirst(true);
if (((Boolean)executeQuery(existQuery)).booleanValue()){
throw new IllegalArgumentException(ExceptionLocalization.buildMessage("cannot_remove_detatched_entity", new Object[]{toBeDeleted}));
}//else, it is a new or previously deleted object that should be ignored (and delete should cascade)
}else{
//fire events only if this is a managed object
if (descriptor.getEventManager().hasAnyEventListeners()) {
oracle.toplink.essentials.descriptors.DescriptorEvent event = new oracle.toplink.essentials.descriptors.DescriptorEvent(toBeDeleted);
event.setEventCode(DescriptorEventManager.PreRemoveEvent);
event.setSession(this);
descriptor.getEventManager().executeEvent(event);
}
if (hasNewObjects() && getNewObjectsOriginalToClone().contains(registeredObject)){
unregisterObject(registeredObject, DescriptorIterator.NoCascading);
}else{
getDeletedObjects().put(toBeDeleted, toBeDeleted);
}
}
descriptor.getObjectBuilder().cascadePerformRemove(toBeDeleted, this, visitedObjects);
} finally {
endOperationProfile(SessionProfiler.DeletedObject);
}
|
protected void | populateAndRegisterObject(java.lang.Object original, java.lang.Object workingClone, java.util.Vector primaryKey, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor, java.lang.Object writeLockValue, long readTime, oracle.toplink.essentials.internal.queryframework.JoinedAttributeManager joinedAttributeManager)INTERNAL:
This method is called from clone and register. It includes the processing
required to clone an object, including populating attributes, putting it in the
UOW identity map, and building a backup clone.
// This must be registered before it is built to avoid cycles.
getIdentityMapAccessorInstance().putInIdentityMap(workingClone, primaryKey, writeLockValue, readTime, descriptor);
//Set ChangeListener for ObjectChangeTrackingPolicy and AttributeChangeTrackingPolicy,
//but not DeferredChangeDetectionPolicy. Build backup clone for DeferredChangeDetectionPolicy
//or ObjectChangeTrackingPolicy, but not for AttributeChangeTrackingPolicy.
// - Set listener before populating attributes so aggregates can find the parent's listener
descriptor.getObjectChangePolicy().setChangeListener(workingClone, this, descriptor);
descriptor.getObjectChangePolicy().dissableEventProcessing(workingClone);
ObjectBuilder builder = descriptor.getObjectBuilder();
builder.populateAttributesForClone(original, workingClone, this, joinedAttributeManager);
Object backupClone = descriptor.getObjectChangePolicy().buildBackupClone(workingClone, builder, this);
getCloneMapping().put(workingClone, backupClone);
descriptor.getObjectChangePolicy().enableEventProcessing(workingClone);
|
protected void | postMergeChanges()INTERNAL:
Remove objects from parent's identity map.
//bug 4730595: objects removed during flush are not removed from the cache during commit
if (!this.getUnitOfWorkChangeSet().getDeletedObjects().isEmpty()){
oracle.toplink.essentials.internal.helper.IdentityHashtable deletedObjects = this.getUnitOfWorkChangeSet().getDeletedObjects();
for (Enumeration removedObjects = deletedObjects.keys(); removedObjects.hasMoreElements(); ) {
ObjectChangeSet removedObjectChangeSet = (ObjectChangeSet) removedObjects.nextElement();
java.util.Vector primaryKeys = removedObjectChangeSet.getPrimaryKeys();
getParent().getIdentityMapAccessor().removeFromIdentityMap(primaryKeys, removedObjectChangeSet.getClassType(this));
}
}
|
protected void | preMergeChanges()INTERNAL:
Remove objects deleted during commit from clone and new object cache so that these are not merged
if (hasObjectsDeletedDuringCommit()) {
for (Enumeration removedObjects = getObjectsDeletedDuringCommit().keys();
removedObjects.hasMoreElements();) {
Object removedObject = removedObjects.nextElement();
getCloneMapping().remove(removedObject);
getAllClones().remove(removedObject);
// PERF: Avoid initialization of new objects if none.
if (hasNewObjects()) {
Object referenceObjectToRemove = getNewObjectsCloneToOriginal().get(removedObject);
if (referenceObjectToRemove != null) {
getNewObjectsCloneToOriginal().remove(removedObject);
getNewObjectsOriginalToClone().remove(referenceObjectToRemove);
}
}
}
}
|
public void | printRegisteredObjects()PUBLIC:
Print the objects in the unit of work.
The output of this method will be logged to this unit of work's SessionLog at SEVERE level.
if (shouldLog(SessionLog.SEVERE, SessionLog.CACHE)) {
basicPrintRegisteredObjects();
}
|
public java.lang.Object | processDeleteObjectQuery(oracle.toplink.essentials.queryframework.DeleteObjectQuery deleteQuery)INTERNAL:
This method is used to process delete queries that pass through the unit of work.
It is extracted out of the internalExecuteQuery method to reduce duplication.
// We must ensure that we delete the clone not the original, (this can happen in the mappings update)
if (deleteQuery.getObject() == null) {// Must validate.
throw QueryException.objectToModifyNotSpecified(deleteQuery);
}
ClassDescriptor descriptor = getDescriptor(deleteQuery.getObject());
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(deleteQuery.getObject(), this);
if (isClassReadOnly(implementation.getClass(), descriptor)) {
throw QueryException.cannotDeleteReadOnlyObject(implementation);
}
if (isCloneNewObject(implementation)) {
unregisterObject(implementation);
return implementation;
}
Vector primaryKey = builder.extractPrimaryKeyFromObject(implementation, this);
Object clone = getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, implementation.getClass(), descriptor, null);
if (clone == null) {
clone = implementation;
}
// Register will wrap so must unwrap again.
clone = builder.unwrapObject(clone, this);
deleteQuery.setObject(clone);
if (!getCommitManager().isActive()) {
getDeletedObjects().put(clone, primaryKey);
return clone;
} else {
// If the object has already been deleted i.e. private-owned + deleted then don't do it twice.
if (hasObjectsDeletedDuringCommit()) {
if (getObjectsDeletedDuringCommit().containsKey(clone)) {
return clone;
}
}
}
return null;
|
public java.util.Vector | registerAllObjects(java.util.Collection domainObjects)PUBLIC:
Register the objects with the unit of work.
All newly created root domain objects must be registered to be inserted on commit.
Also any existing objects that will be edited and were not read from this unit of work
must also be registered.
Once registered, any changes to the objects will be committed to the database on commit.
Vector clones = new Vector(domainObjects.size());
for (Iterator objectsEnum = domainObjects.iterator(); objectsEnum.hasNext();) {
clones.addElement(registerObject(objectsEnum.next()));
}
return clones;
|
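A small sketch of registering a collection of existing objects and editing the returned clones; the Session parameter and the contents of the collection are illustrative assumptions.
import java.util.Collection;
import java.util.Vector;
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class RegisterAllExample {
    public static void editAll(Session session, Collection entities) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Each element of the returned Vector is the working copy of the
        // corresponding input object; edit the clones, not the originals.
        Vector workingCopies = uow.registerAllObjects(entities);
        // ... mutate the working copies ...
        uow.commit();
    }
}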
public java.util.Vector | registerAllObjects(java.util.Vector domainObjects)PUBLIC:
Register the objects with the unit of work.
All newly created root domain objects must be registered to be inserted on commit.
Also any existing objects that will be edited and were not read from this unit of work
must also be registered.
Once registered, any changes to the objects will be committed to the database on commit.
Vector clones = new Vector(domainObjects.size());
for (Enumeration objectsEnum = domainObjects.elements(); objectsEnum.hasMoreElements();) {
clones.addElement(registerObject(objectsEnum.nextElement()));
}
return clones;
|
public synchronized java.lang.Object | registerExistingObject(java.lang.Object existingObject, oracle.toplink.essentials.internal.queryframework.JoinedAttributeManager joinedAttributeManager)INTERNAL:
Register the existing object with the unit of work.
This is an advanced API that can be used if the application can guarantee the object exists in the database.
When registerObject is called, the unit of work determines existence through the descriptor's doesExist setting.
if (existingObject == null) {
return null;
}
ClassDescriptor descriptor = getDescriptor(existingObject);
if (descriptor == null) {
throw DescriptorException.missingDescriptor(existingObject.getClass().toString());
}
if (this.isClassReadOnly(descriptor.getJavaClass(), descriptor)) {
return existingObject;
}
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(existingObject, this);
Object registeredObject = this.registerExistingObject(implementation, descriptor, joinedAttributeManager);
// Bug # 3212057 - workaround JVM bug (MWN)
if (implementation != existingObject) {
return builder.wrapObject(registeredObject, this);
} else {
return registeredObject;
}
|
public synchronized java.lang.Object | registerExistingObject(java.lang.Object existingObject)ADVANCED:
Register the existing object with the unit of work.
This is an advanced API that can be used if the application can guarantee the object exists in the database.
When registerObject is called, the unit of work determines existence through the descriptor's doesExist setting.
return registerExistingObject(existingObject, null);
|
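A brief sketch of the ADVANCED registerExistingObject path, to be used only when the application can guarantee the object already exists in the database; the Session and entity parameters are illustrative assumptions.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class RegisterExistingExample {
    public static Object editKnownExisting(Session session, Object existing) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Skips the doesExist check performed by registerObject, so the
        // object is treated as existing and changes produce an UPDATE.
        Object workingCopy = uow.registerExistingObject(existing);
        // ... mutate workingCopy through its setters ...
        uow.commit();
        return workingCopy;
    }
}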
protected synchronized java.lang.Object | registerExistingObject(java.lang.Object objectToRegister, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor, oracle.toplink.essentials.internal.queryframework.JoinedAttributeManager joinedAttributeManager)INTERNAL:
Register the existing object with the unit of work.
This is an advanced API that can be used if the application can guarantee the object exists in the database.
When registerObject is called, the unit of work determines existence through the descriptor's doesExist setting.
if (isAfterWriteChangesButBeforeCommit()) {
throw ValidationException.illegalOperationForUnitOfWorkLifecycle(getLifecycle(), "registerExistingObject");
}
if (descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
throw ValidationException.cannotRegisterAggregateObjectInUnitOfWork(objectToRegister.getClass());
}
//CR#2272
logDebugMessage(objectToRegister, "register_existing");
Object registeredObject;
try {
startOperationProfile(SessionProfiler.Register);
registeredObject = checkIfAlreadyRegistered(objectToRegister, descriptor);
if (registeredObject == null) {
// Check if object is existing, if it is it must be cloned into the unit of work
// otherwise it is a new object
Vector primaryKey = descriptor.getObjectBuilder().extractPrimaryKeyFromObject(objectToRegister, this);
// Always check the cache first.
registeredObject = getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, objectToRegister.getClass(), descriptor, joinedAttributeManager);
if (registeredObject == null) {
// This is a case where the object is not in the session cache,
// so a new cache-key is used as there is no original to use for locking.
registeredObject = cloneAndRegisterObject(objectToRegister, new CacheKey(primaryKey), joinedAttributeManager);
}
}
//bug3659327
//fetch group manager control fetch group support
if (descriptor.hasFetchGroupManager()) {
//if the object is already registered in uow, but it's partially fetched (fetch group case)
if (descriptor.getFetchGroupManager().shouldWriteInto(objectToRegister, registeredObject)) {
//there might be cases when reverting/refreshing clone is needed.
descriptor.getFetchGroupManager().writePartialIntoClones(objectToRegister, registeredObject, this);
}
}
} finally {
endOperationProfile(SessionProfiler.Register);
}
return registeredObject;
|
public synchronized java.lang.Object | registerNewObject(java.lang.Object newObject)ADVANCED:
Register the new object with the unit of work.
This will register the new object without cloning.
Normally the registerObject method should be used for all registration of new and existing objects.
This version of the register method can only be used for new objects.
This method should only be used if a new object is desired to be registered without cloning.
if (newObject == null) {
return null;
}
ClassDescriptor descriptor = getDescriptor(newObject);
if (descriptor == null) {
throw DescriptorException.missingDescriptor(newObject.getClass().toString());
}
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(newObject, this);
this.registerNewObject(implementation, descriptor);
if (implementation == newObject) {
return newObject;
} else {
return builder.wrapObject(implementation, this);
}
|
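A minimal sketch of the ADVANCED registerNewObject path, which registers a brand-new instance without cloning; the Session parameter and the newly constructed entity are illustrative assumptions.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class RegisterNewExample {
    public static void insertWithoutCloning(Session session, Object brandNewEntity) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Unlike registerObject, no clone is created: the instance passed in
        // is itself the working copy and will be inserted on commit.
        uow.registerNewObject(brandNewEntity);
        uow.commit();
    }
}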
protected synchronized java.lang.Object | registerNewObject(java.lang.Object implementation, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Updated to allow passing in of the object's descriptor
Register the new object with the unit of work.
This will register the new object without cloning.
Normally the registerObject method should be used for all registration of new and existing objects.
This version of the register method can only be used for new objects.
This method should only be used if a new object is desired to be registered without cloning.
if (isAfterWriteChangesButBeforeCommit()) {
throw ValidationException.illegalOperationForUnitOfWorkLifecycle(getLifecycle(), "registerNewObject");
}
if (descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
throw ValidationException.cannotRegisterAggregateObjectInUnitOfWork(implementation.getClass());
}
try {
//CR#2272
logDebugMessage(implementation, "register_new");
startOperationProfile(SessionProfiler.Register);
Object registeredObject = checkIfAlreadyRegistered(implementation, descriptor);
if (registeredObject == null) {
// Ensure that the registered object is the one from the parent cache.
if (shouldPerformFullValidation()) {
Vector primaryKey = descriptor.getObjectBuilder().extractPrimaryKeyFromObject(implementation, this);
Object objectFromCache = getParent().getIdentityMapAccessorInstance().getFromIdentityMap(primaryKey, implementation.getClass(), descriptor, null);
if (objectFromCache != null) {
throw ValidationException.wrongObjectRegistered(implementation, objectFromCache);
}
}
ObjectBuilder builder = descriptor.getObjectBuilder();
Object original = builder.buildNewInstance();
registerNewObjectClone(implementation, original, descriptor);
Object backupClone = builder.buildNewInstance();
getCloneMapping().put(implementation, backupClone);
// Check if the new objects should be cached.
registerNewObjectInIdentityMap(implementation, implementation);
}
} finally {
endOperationProfile(SessionProfiler.Register);
}
//as this is register-new, return the object passed in.
return implementation;
|
protected void | registerNewObjectClone(java.lang.Object clone, java.lang.Object original, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Register the working copy of a new object and its original.
The user must edit the working copy and the original is used to merge into the parent.
This mapping is kept both ways because lookup is required in both directions.
// Check if the new objects should be cached.
registerNewObjectInIdentityMap(clone, original);
getNewObjectsCloneToOriginal().put(clone, original);
getNewObjectsOriginalToClone().put(original, clone);
// run prePersist callbacks if any
logDebugMessage(clone, "register_new_for_persist");
if (descriptor.getEventManager().hasAnyEventListeners()) {
oracle.toplink.essentials.descriptors.DescriptorEvent event = new oracle.toplink.essentials.descriptors.DescriptorEvent(clone);
event.setEventCode(DescriptorEventManager.PrePersistEvent);
event.setSession(this);
descriptor.getEventManager().executeEvent(event);
}
|
public synchronized void | registerNewObjectForPersist(java.lang.Object newObject, oracle.toplink.essentials.internal.helper.IdentityHashtable visitedObjects)INTERNAL:
Register the new object with the unit of work.
This will register the new object without cloning.
Checks based on existence will be completed and the create will be cascaded based on the
object's mapping cascade requirements. This is specific to EJB 3.0 support.
try {
if (newObject == null) {
return;
}
ClassDescriptor descriptor = getDescriptor(newObject);
if ((descriptor == null) || descriptor.isAggregateDescriptor() || descriptor.isAggregateCollectionDescriptor()) {
throw new IllegalArgumentException(ExceptionLocalization.buildMessage("not_an_entity", new Object[]{newObject}));
}
startOperationProfile(SessionProfiler.Register);
Object registeredObject = checkIfAlreadyRegistered(newObject, descriptor);
if (registeredObject == null) {
registerNotRegisteredNewObjectForPersist(newObject, descriptor);
} else if (this.isObjectDeleted(newObject)){
//if the object is deleted and a create is issued on that object,
// then the object must be transitioned back to existing and not deleted
this.undeleteObject(newObject);
}
descriptor.getObjectBuilder().cascadeRegisterNewForCreate(newObject, this, visitedObjects);
} finally {
endOperationProfile(SessionProfiler.Register);
}
|
protected void | registerNewObjectInIdentityMap(java.lang.Object clone, java.lang.Object original)INTERNAL:
Add the new object to the cache if configured to do so.
This is useful for using mergeClone on new objects.
// CR 2728 Added check for sequencing to allow zero primitives for ids if the client
// is not using sequencing.
Class cls = clone.getClass();
ClassDescriptor descriptor = getDescriptor(cls);
boolean usesSequences = descriptor.usesSequenceNumbers();
if (shouldNewObjectsBeCached()) {
// Also put it in the cache if it has a valid primary key, this allows for double new object merges
Vector key = keyFromObject(clone, descriptor);
boolean containsNull = false;
// begin CR#2041 Unit Of Work incorrectly put new objects with a primitive primary key in its cache
Object pkElement;
for (int index = 0; index < key.size(); index++) {
pkElement = key.elementAt(index);
if (pkElement == null) {
containsNull = true;
} else if (usesSequences) {
containsNull = containsNull || getSequencing().shouldOverrideExistingValue(cls, pkElement);
}
}
// end cr #2041
if (!containsNull) {
getIdentityMapAccessorInstance().putInIdentityMap(clone, key, null, 0, descriptor);
}
}
|
protected void | registerNotRegisteredNewObjectForPersist(java.lang.Object newObject, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Called only by registerNewObjectForPersist method,
and only if newObject is not already registered.
Could be overridden in subclasses.
// Ensure that the registered object is not detached.
newObject.getClass();
DoesExistQuery existQuery = descriptor.getQueryManager().getDoesExistQuery();
existQuery = (DoesExistQuery)existQuery.clone();
existQuery.setObject(newObject);
existQuery.setDescriptor(descriptor);
// only check the cache, as we can wait until commit for the unique
// constraint error to be thrown. This does ignore the user's settings
// on the descriptor, but calling persist() tells us the object is new.
existQuery.checkCacheForDoesExist();
if (((Boolean)executeQuery(existQuery)).booleanValue()) {
throw ValidationException.cannotPersistExistingObject(newObject, this);
}
ObjectBuilder builder = descriptor.getObjectBuilder();
Object original = builder.buildNewInstance();
registerNewObjectClone(newObject, original, descriptor);
Object backupClone = builder.buildNewInstance();
getCloneMapping().put(newObject, backupClone);
assignSequenceNumber(newObject);
// Check if the new objects should be cached.
registerNewObjectInIdentityMap(newObject, newObject);
|
public synchronized java.lang.Object | registerObject(java.lang.Object object)PUBLIC:
Register the object with the unit of work.
All newly created root domain objects must be registered to be inserted on commit.
Also any existing objects that will be edited and were not read from this unit of work
must also be registered.
Once registered, any changes to the objects will be committed to the database on commit.
if (object == null) {
return null;
}
ClassDescriptor descriptor = getDescriptor(object);
if (descriptor == null) {
throw DescriptorException.missingDescriptor(object.getClass().toString());
}
if (this.isClassReadOnly(descriptor.getJavaClass(), descriptor)) {
return object;
}
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(object, this);
boolean wasWrapped = implementation != object;
Object registeredObject = this.registerObject(implementation, descriptor);
if (wasWrapped) {
return builder.wrapObject(registeredObject, this);
} else {
return registeredObject;
}
|
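The typical register/edit/commit cycle described above, as a minimal sketch; the Session parameter and the entity instance are assumptions made for illustration.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class RegisterObjectExample {
    public static void editExisting(Session session, Object entityReadFromParent) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Always work on the returned working copy, never on the original.
        Object workingCopy = uow.registerObject(entityReadFromParent);
        // ... mutate workingCopy through its setters ...
        // Changes between the working copy and its backup clone are
        // computed and written to the database on commit.
        uow.commit();
    }
}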
protected synchronized java.lang.Object | registerObject(java.lang.Object object, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
Allows the calling method to provide the descriptor information for this
object, preventing a double lookup of the descriptor.
Register the object with the unit of work.
All newly created root domain objects must be registered to be inserted on commit.
Also any existing objects that will be edited and were not read from this unit of work
must also be registered.
Once registered, any changes to the objects will be committed to the database on commit.
Calling this method will also sort the objects into different groups
depending on whether the object being registered is a bean or a regular Java
object, and whether its updates are deferred, non-deferred, or all modifications
are deferred.
if (this.isClassReadOnly(descriptor.getJavaClass(), descriptor)) {
return object;
}
if (isAfterWriteChangesButBeforeCommit()) {
throw ValidationException.illegalOperationForUnitOfWorkLifecycle(getLifecycle(), "registerObject");
}
//CR#2272
logDebugMessage(object, "register");
Object registeredObject;
try {
startOperationProfile(SessionProfiler.Register);
registeredObject = internalRegisterObject(object, descriptor);
} finally {
endOperationProfile(SessionProfiler.Register);
}
return registeredObject;
|
public void | registerWithTransactionIfRequired()INTERNAL:
Register this UnitOfWork against an external transaction controller
if (getParent().hasExternalTransactionController() && ! isSynchronized()) {
boolean hasAlreadyStarted = getParent().wasJTSTransactionInternallyStarted();
getParent().getExternalTransactionController().registerSynchronizationListener(this, getParent());
// CR#2998 - registerSynchronizationListener may toggle the wasJTSTransactionInternallyStarted
// flag. As a result, we must compare the states and if the state is changed, then we must set the
// setWasTransactionBegunPrematurely flag to ensure that we handle the transaction depth count
// appropriately
if (!hasAlreadyStarted && getParent().wasJTSTransactionInternallyStarted()) {
// registerSynchronizationListener caused beginTransaction() called
// and an external transaction internally started.
this.setWasTransactionBegunPrematurely(true);
}
}
|
public void | release()PUBLIC:
Release the unit of work. This terminates this unit of work.
Because the unit of work operates on its own object space (clones) no work is required.
The unit of work should no longer be used or referenced by the application beyond this point
so that it can be garbage collected.
log(SessionLog.FINER, SessionLog.TRANSACTION, "release_unit_of_work");
getEventManager().preReleaseUnitOfWork();
// If writeChanges() already succeeded, the transaction is still open.
// As SQL has already been issued, must at least mark the external transaction for rollback only.
if (getLifecycle() == CommitTransactionPending) {
if (hasModifications() || wasTransactionBegunPrematurely()) {
rollbackTransaction(false);
setWasTransactionBegunPrematurely(false);
}
} else if (wasTransactionBegunPrematurely() && (!isNestedUnitOfWork())) {
rollbackTransaction();
setWasTransactionBegunPrematurely(false);
}
if ((getMergeManager() != null) && (getMergeManager().getAcquiredLocks() != null) && (!getMergeManager().getAcquiredLocks().isEmpty())) {
// May have unreleased cache locks because of a rollback, as some
// locks may be acquired during commit.
getParent().getIdentityMapAccessorInstance().getWriteLockManager().releaseAllAcquiredLocks(getMergeManager());
this.setMergeManager(null);
}
setDead();
if(shouldClearForCloseOnRelease()) {
clearForClose(true);
}
getParent().releaseUnitOfWork(this);
getEventManager().postReleaseUnitOfWork();
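A brief sketch of the discard pattern, assuming a Session named session and the same hypothetical Employee placeholder; the unit of work and its clones are not used again after release().
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class ReleaseSketch {
    public static void editThenDiscard(Session session) {
        UnitOfWork uow = session.acquireUnitOfWork();
        Employee clone = (Employee) uow.readObject(Employee.class);
        clone.setSalary(clone.getSalary() + 1000);   // change exists only in the unit of work's clone
        uow.release();                               // discard the object space instead of committing
        // Do not use "uow" or "clone" beyond this point; let them be garbage collected.
    }
}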
|
public void | removeAllReadOnlyClasses()PUBLIC:
Empties the set of read-only classes.
It is illegal to call this method on nested UnitOfWork objects. A nested UnitOfWork
cannot have a subset of its parent's set of read-only classes.
Also removes classes which are read-only because their descriptors are read-only.
if (isNestedUnitOfWork()) {
throw ValidationException.cannotRemoveFromReadOnlyClassesInNestedUnitOfWork();
}
getReadOnlyClasses().clear();
|
public void | removeForceUpdateToVersionField(java.lang.Object lockObject)ADVANCED:
Remove optimistic read lock from the object
See forceUpdateToVersionField(Object)
getOptimisticReadLockObjects().remove(lockObject);
|
public void | removeReadOnlyClass(java.lang.Class theClass)PUBLIC:
Removes a Class from the receiver's set of read-only classes.
It is illegal to call this method on a nested UnitOfWork.
if (!canChangeReadOnlySet()) {
throw ValidationException.cannotModifyReadOnlyClassesSetAfterUsingUnitOfWork();
}
if (isNestedUnitOfWork()) {
throw ValidationException.cannotRemoveFromReadOnlyClassesInNestedUnitOfWork();
}
getReadOnlyClasses().remove(theClass);
|
protected void | resetAllCloneCollection()INTERNAL:
Used on resume to reset the all-clones collection.
this.allClones = null;
|
public void | revertAndResume()PUBLIC:
Revert all changes made to any registered object.
Clear all deleted and new objects.
Revert should not be confused with release, which is the normal complement to commit.
Revert is more similar to commit and resume; however, it reverts all changes and resumes.
If you do not need to resume the unit of work, release should be used instead.
if (isAfterWriteChangesButBeforeCommit()) {
throw ValidationException.illegalOperationForUnitOfWorkLifecycle(getLifecycle(), "revertAndResume");
}
log(SessionLog.FINER, SessionLog.TRANSACTION, "revert_unit_of_work");
MergeManager manager = new MergeManager(this);
manager.mergeOriginalIntoWorkingCopy();
manager.cascadeAllParts();
for (Enumeration cloneEnum = getCloneMapping().keys(); cloneEnum.hasMoreElements();) {
Object clone = cloneEnum.nextElement();
// Revert each clone.
manager.mergeChanges(clone, null);
ClassDescriptor descriptor = this.getDescriptor(clone);
//revert the tracking policy
descriptor.getObjectChangePolicy().revertChanges(clone, descriptor, this, this.getCloneMapping());
}
// PERF: Avoid initialization of new objects if none.
if (hasNewObjects()) {
for (Enumeration cloneEnum = getNewObjectsCloneToOriginal().keys();
cloneEnum.hasMoreElements();) {
Object clone = cloneEnum.nextElement();
// De-register the object.
getCloneMapping().remove(clone);
}
if (this.getUnitOfWorkChangeSet() != null){
((UnitOfWorkChangeSet)this.getUnitOfWorkChangeSet()).getNewObjectChangeSets().clear();
}
}
// Clear new and deleted objects.
setNewObjectsCloneToOriginal(null);
setNewObjectsOriginalToClone(null);
// Reset the all clones collection
resetAllCloneCollection();
// 2612538 - the default size of IdentityHashtable (32) is appropriate
setObjectsDeletedDuringCommit(new IdentityHashtable());
setDeletedObjects(new IdentityHashtable());
setRemovedObjects(new IdentityHashtable());
setUnregisteredNewObjects(new IdentityHashtable());
log(SessionLog.FINER, SessionLog.TRANSACTION, "resume_unit_of_work");
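A sketch of revert-and-resume with the hypothetical Employee placeholder; the same unit of work continues to be usable afterwards, as described above.
import oracle.toplink.essentials.sessions.UnitOfWork;

public class RevertAndResumeSketch {
    public static void discardEditsButKeepWorking(UnitOfWork uow) {
        Employee clone = (Employee) uow.readObject(Employee.class);
        clone.setSalary(-1);            // an edit we decide to throw away
        uow.revertAndResume();          // all clones reverted, new and deleted objects cleared
        // The unit of work is still active and can register or read further objects.
        uow.registerObject(new Employee());
    }
}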
|
public java.lang.Object | revertObject(java.lang.Object clone)PUBLIC:
Revert the object's attributes from the parent.
This also reverts the object privately-owned parts.
return revertObject(clone, MergeManager.CASCADE_PRIVATE_PARTS);
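A sketch of reverting a single working copy (hypothetical Employee placeholder) without affecting the rest of the unit of work.
import oracle.toplink.essentials.sessions.UnitOfWork;

public class RevertObjectSketch {
    public static void revertOneClone(UnitOfWork uow, Employee workingCopy) {
        workingCopy.setSalary(0);        // unwanted change made to the clone
        // Restores the clone's attributes (including privately-owned parts) from the parent.
        uow.revertObject(workingCopy);
    }
}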
|
public java.lang.Object | revertObject(java.lang.Object clone, int cascadeDepth)INTERNAL:
Revert the object's attributes from the parent.
This uses merging to merge the object changes.
if (clone == null) {
return null;
}
//CR#2272
logDebugMessage(clone, "revert");
ClassDescriptor descriptor = getDescriptor(clone);
ObjectBuilder builder = descriptor.getObjectBuilder();
Object implementation = builder.unwrapObject(clone, this);
MergeManager manager = new MergeManager(this);
manager.mergeOriginalIntoWorkingCopy();
manager.setCascadePolicy(cascadeDepth);
try {
manager.mergeChanges(implementation, null);
} catch (RuntimeException exception) {
return handleException(exception);
}
return clone;
|
public void | rollbackTransaction()INTERNAL:
This is internal to the unit of work; transactions should not be used explicitly in a unit of work.
The unit of work shares its parent's transactions.
incrementProfile(SessionProfiler.UowRollbacks);
getParent().rollbackTransaction();
|
protected void | rollbackTransaction(boolean intendedToCommitTransaction)INTERNAL:
rollbackTransaction() with a twist for external transactions.
writeChanges() is called outside the JTA beforeCompletion(), so the
accompanying exception won't propagate up and cause a rollback by itself.
Instead, the transaction must be marked for rollback only here.
If the external transaction was started internally, or there is no external transaction,
it can still roll back normally.
if (!intendedToCommitTransaction && getParent().hasExternalTransactionController() && !getParent().wasJTSTransactionInternallyStarted()) {
getParent().getExternalTransactionController().markTransactionForRollback();
}
rollbackTransaction();
|
public oracle.toplink.essentials.internal.helper.IdentityHashtable | scanForConformingInstances(oracle.toplink.essentials.expressions.Expression selectionCriteria, java.lang.Class referenceClass, oracle.toplink.essentials.internal.sessions.AbstractRecord arguments, oracle.toplink.essentials.queryframework.ObjectLevelReadQuery query)INTERNAL:
Scans the UnitOfWork identity map for conforming instances.
Later this method can be made recursive to check all parent units of
work also.
// for bug 3568141 use the painstaking shouldTriggerIndirection if set
InMemoryQueryIndirectionPolicy policy = query.getInMemoryQueryIndirectionPolicy();
if (!policy.shouldTriggerIndirection()) {
policy = new InMemoryQueryIndirectionPolicy(InMemoryQueryIndirectionPolicy.SHOULD_IGNORE_EXCEPTION_RETURN_NOT_CONFORMED);
}
IdentityHashtable indexedInterimResult = new IdentityHashtable();
try {
Vector fromCache = null;
if (selectionCriteria != null) {
// assume objects that have the compared relationship
// untriggered do not conform as they have not been changed.
// bug 2637555
fromCache = getIdentityMapAccessor().getAllFromIdentityMap(selectionCriteria, referenceClass, arguments, policy);
for (Enumeration fromCacheEnum = fromCache.elements();
fromCacheEnum.hasMoreElements();) {
Object object = fromCacheEnum.nextElement();
if (!isObjectDeleted(object)) {
indexedInterimResult.put(object, object);
}
}
}
// Add any new objects that conform to the query.
Vector newObjects = null;
newObjects = getAllFromNewObjects(selectionCriteria, referenceClass, arguments, policy);
for (Enumeration newObjectsEnum = newObjects.elements();
newObjectsEnum.hasMoreElements();) {
Object object = newObjectsEnum.nextElement();
if (!isObjectDeleted(object)) {
indexedInterimResult.put(object, object);
}
}
} catch (QueryException exception) {
if (getShouldThrowConformExceptions() == THROW_ALL_CONFORM_EXCEPTIONS) {
throw exception;
}
}
return indexedInterimResult;
|
protected void | setAllClonesCollection(oracle.toplink.essentials.internal.helper.IdentityHashtable objects)INTERNAL:
Used to set the collection of all objects in the UnitOfWork.
this.allClones = objects;
|
protected void | setCloneMapping(oracle.toplink.essentials.internal.helper.IdentityHashtable cloneMapping)INTERNAL:
Set the clone mapping.
The clone mapping contains clones of all registered objects.
It is required to store the original state of the objects when registered,
so that only what has changed will be committed to the database and the parent
(this is required to support parallel units of work).
this.cloneMapping = cloneMapping;
|
public void | setDead()INTERNAL:
set UoW lifecycle state variable to DEATH
setLifecycle(Death);
|
protected void | setDeletedObjects(oracle.toplink.essentials.internal.helper.IdentityHashtable deletedObjects)INTERNAL:
The deleted objects stores any objects removed during the unit of work.
On commit they will all be removed from the database.
this.deletedObjects = deletedObjects;
|
protected void | setLifecycle(int lifecycle)INTERNAL:
The life cycle tracks if the unit of work is active and is used for JTS.
this.lifecycle = lifecycle;
|
public void | setMergeManager(oracle.toplink.essentials.internal.sessions.MergeManager mergeManager)INTERNAL:
A reference to the last used merge manager. This is used to track locked
objects.
this.lastUsedMergeManager = mergeManager;
|
protected void | setNewObjectsCloneToOriginal(oracle.toplink.essentials.internal.helper.IdentityHashtable newObjects)INTERNAL:
The new objects stores any objects newly created during the unit of work.
On commit they will all be inserted into the database.
this.newObjectsCloneToOriginal = newObjects;
|
protected void | setNewObjectsOriginalToClone(oracle.toplink.essentials.internal.helper.IdentityHashtable newObjects)INTERNAL:
The new objects stores any objects newly created during the unit of work.
On commit they will all be inserted into the database.
this.newObjectsOriginalToClone = newObjects;
|
public void | setObjectsDeletedDuringCommit(oracle.toplink.essentials.internal.helper.IdentityHashtable deletedObjects)INTERNAL:
Set the objects that have been deleted.
objectsDeletedDuringCommit = deletedObjects;
|
public void | setParent(oracle.toplink.essentials.internal.sessions.AbstractSession parent)INTERNAL:
Set the parent.
This is a unit of work if nested, otherwise a database session or client session.
this.parent = parent;
|
public void | setPendingMerge()INTERNAL:
set UoW lifecycle state variable to PENDING_MERGE
setLifecycle(MergePending);
|
public void | setReadOnlyClasses(java.util.Vector classes)INTERNAL:
Gives a new set of read-only classes to the receiver.
The given classes are processed so that subclasses of a read-only class are also
added to the read-only set.
this.readOnlyClasses = new Hashtable(classes.size() + 10);
for (Enumeration enumtr = classes.elements(); enumtr.hasMoreElements();) {
Class theClass = (Class)enumtr.nextElement();
addReadOnlyClass(theClass);
}
|
protected void | setRemovedObjects(oracle.toplink.essentials.internal.helper.IdentityHashtable removedObjects)INTERNAL:
The removed objects stores any newly registered objects removed during the nested unit of work.
On commit they will all be removed from the parent unit of work.
this.removedObjects = removedObjects;
|
public void | setResumeUnitOfWorkOnTransactionCompletion(boolean resumeUnitOfWork)INTERNAL:
Set whether this UnitOfWork should be resumed after the end of the transaction.
Used when the UnitOfWork is synchronized with external transaction control.
this.resumeOnTransactionCompletion = resumeUnitOfWork;
|
public void | setShouldCascadeCloneToJoinedRelationship(boolean shouldCascadeCloneToJoinedRelationship)INTERNAL:
True if the value holder for the joined attribute should be triggered.
Required by ejb30 fetch join.
this.shouldCascadeCloneToJoinedRelationship = shouldCascadeCloneToJoinedRelationship;
|
public void | setShouldNewObjectsBeCached(boolean shouldNewObjectsBeCached)ADVANCED:
By default new objects are not cached until they exist in the database.
Occasionally, if mergeClone is used on new objects and multiple merges are required
on the same new object, then if the new objects are not cached, each mergeClone will be
interpreted as a different new object.
By setting new objects to be cached, mergeClone can be performed multiple times before commit.
New objects cannot be cached unless they have a valid assigned primary key before being registered.
New objects with non-null but invalid primary keys, such as 0 or '', can cause problems and should not be used with this option.
this.shouldNewObjectsBeCached = shouldNewObjectsBeCached;
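A sketch of the multiple-merge scenario described above; the Employee argument is a hypothetical detached instance whose primary key has already been assigned.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class CacheNewObjectsSketch {
    public static void mergeNewObjectTwice(Session session, Employee detachedWithAssignedId) {
        UnitOfWork uow = session.acquireUnitOfWork();
        uow.setShouldNewObjectsBeCached(true);          // must be enabled before the first merge
        uow.mergeClone(detachedWithAssignedId);         // first merge registers the new object (valid PK required)
        detachedWithAssignedId.setName("updated name");
        uow.mergeClone(detachedWithAssignedId);         // second merge finds the same cached new object
        uow.commit();
    }
}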
|
public void | setShouldPerformDeletesFirst(boolean shouldPerformDeletesFirst)ADVANCED:
By default deletes are performed last in a unit of work.
Sometimes you may want to have the deletes performed before other actions.
this.shouldPerformDeletesFirst = shouldPerformDeletesFirst;
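A sketch of ordering deletes before inserts, for example when a unique value held by a deleted row is reused by a new one; both Employee arguments are hypothetical.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class DeletesFirstSketch {
    public static void replaceEmployee(Session session, Employee obsolete, Employee replacement) {
        UnitOfWork uow = session.acquireUnitOfWork();
        uow.setShouldPerformDeletesFirst(true);   // DELETE is issued before INSERT/UPDATE on commit
        uow.deleteObject(obsolete);               // frees the unique value held by the old row
        uow.registerObject(replacement);          // the new row may reuse that unique value
        uow.commit();
    }
}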
|
public void | setShouldThrowConformExceptions(int shouldThrowExceptions)ADVANCED:
Conforming queries can be set to provide different levels of detail about the
exceptions they encounter.
The levels include:
DO_NOT_THROW_CONFORM_EXCEPTIONS = 0;
THROW_ALL_CONFORM_EXCEPTIONS = 1;
this.shouldThrowConformExceptions = shouldThrowExceptions;
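A sketch assuming the application can reach this method on the implementation class documented here; the numeric argument follows the constants listed above.
import oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl;
import oracle.toplink.essentials.sessions.Session;

public class ConformExceptionsSketch {
    public static void surfaceConformProblems(Session session) {
        // Cast to the implementation class documented here; assumed accessible to the application.
        UnitOfWorkImpl uow = (UnitOfWorkImpl) session.acquireUnitOfWork();
        uow.setShouldThrowConformExceptions(1);   // 1 = THROW_ALL_CONFORM_EXCEPTIONS (see the levels above)
        // Conforming in-memory queries executed through this unit of work will now
        // rethrow the exceptions they encounter instead of silently ignoring them.
    }
}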
|
public static void | setSmartMerge(boolean option)INTERNAL:
Set smart merge flag. This feature is used in WL to merge dependent values without SessionAccessor
SmartMerge = option;
|
public void | setSynchronized(boolean synched)INTERNAL:
Set isSynchronized flag to indicate that this session is a synchronized unit of work.
isSynchronized = synched;
|
public void | setTransaction(java.lang.Object transaction)INTERNAL:
PERF: Set the associated external transaction.
Used to optimize activeUnitOfWork lookup.
this.transaction = transaction;
|
public void | setUnitOfWorkChangeSet(oracle.toplink.essentials.internal.sessions.UnitOfWorkChangeSet unitOfWorkChangeSet)INTERNAL:
Sets the current UnitOfWork change set to be the one passed in.
this.unitOfWorkChangeSet = unitOfWorkChangeSet;
|
protected void | setUnregisteredExistingObjects(oracle.toplink.essentials.internal.helper.IdentityHashtable newUnregisteredExistingObjects)INTERNAL:
Used to set the unregistered existing objects vector used when validation has been turned off.
unregisteredExistingObjects = newUnregisteredExistingObjects;
|
protected void | setUnregisteredNewObjects(oracle.toplink.essentials.internal.helper.IdentityHashtable newObjects)INTERNAL:
unregisteredNewObjects = newObjects;
|
public void | setValidationLevel(int validationLevel)ADVANCED:
The unit of work performs validations such as
ensuring multiple copies of the same object don't exist in the same unit of work,
ensuring deleted objects are not referenced after commit, and
ensuring that objects from the parent cache are not referenced in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes or in
advanced situations where the application requires or desires to violate clone identity in the unit of work.
It is strongly suggested that clone identity not be violated in the unit of work.
this.validationLevel = validationLevel;
|
public void | setWasNonObjectLevelModifyQueryExecuted(boolean wasNonObjectLevelModifyQueryExecuted)INTERNAL:
True if either DataModifyQuery or ModifyAllQuery was executed.
In the absence of a transaction, the query execution starts one; therefore
the flag may only be true inside a transaction. It is reset on commit or rollback.
this.wasNonObjectLevelModifyQueryExecuted = wasNonObjectLevelModifyQueryExecuted;
|
public void | setWasTransactionBegunPrematurely(boolean wasTransactionBegunPrematurely)INTERNAL:
Set a flag in the root unit of work to indicate that a pessimistic locking or non-selecting SQL query was executed
and forced a transaction to be started.
if (isNestedUnitOfWork()) {
((UnitOfWorkImpl)getParent()).setWasTransactionBegunPrematurely(wasTransactionBegunPrematurely);
}
this.wasTransactionBegunPrematurely = wasTransactionBegunPrematurely;
|
public java.lang.Object | shallowMergeClone(java.lang.Object rmiClone)PUBLIC:
Merge the attributes of the clone into the unit of work copy.
This can be used for objects that are returned from the client through
RMI serialization (or other serialization mechanisms). Because the RMI object will
be a clone, this will merge its attributes correctly to preserve object identity
within the unit of work and record its changes.
Only direct attributes are merged.
return mergeClone(rmiClone, MergeManager.NO_CASCADE);
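A sketch of merging an object returned from a remote client; serializedEmployee is a hypothetical Employee instance received back through RMI or another serialization mechanism.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class ShallowMergeSketch {
    public static void applyClientEdits(Session session, Employee serializedEmployee) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Only the direct (non-relationship) attributes of the serialized copy are merged.
        Employee workingCopy = (Employee) uow.shallowMergeClone(serializedEmployee);
        // Any further edits should go through workingCopy, not the serialized instance.
        uow.commit();
    }
}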
|
public java.lang.Object | shallowRevertObject(java.lang.Object clone)PUBLIC:
Revert the object's attributes from the parent.
This only reverts the object's direct attributes.
return revertObject(clone, MergeManager.NO_CASCADE);
|
public void | shallowUnregisterObject(java.lang.Object clone)ADVANCED:
Unregister the object with the unit of work.
This can be used to delete an object that was just created and is not yet persistent.
deleteObject can also be used, but will result in inserting the object and then deleting it.
This method will only unregister the clone, none of its parts.
unregisterObject(clone, DescriptorIterator.NoCascading);
|
public boolean | shouldCascadeCloneToJoinedRelationship()INTERNAL:
True if the value holder for the joined attribute should be triggered.
Required by ejb30 fetch join.
return shouldCascadeCloneToJoinedRelationship;
|
public boolean | shouldClearForCloseOnRelease()INTERNAL:
Indicates whether the clearForClose method should be called by the release method.
return false;
|
public boolean | shouldNewObjectsBeCached()ADVANCED:
By default new objects are not cached until they exist in the database.
Occasionally, if mergeClone is used on new objects and multiple merges are required
on the same new object, then if the new objects are not cached, each mergeClone will be
interpreted as a different new object.
By setting new objects to be cached, mergeClone can be performed multiple times before commit.
New objects cannot be cached unless they have a valid assigned primary key before being registered.
New objects with non-null but invalid primary keys, such as 0 or '', can cause problems and should not be used with this option.
return shouldNewObjectsBeCached;
|
public boolean | shouldPerformDeletesFirst()ADVANCED:
By default all objects are inserted and updated in the database before
any object is deleted. If this flag is set to true, deletes will be
performed before inserts and updates.
return shouldPerformDeletesFirst;
|
public boolean | shouldPerformFullValidation()ADVANCED:
The unit of work performs validations such as
ensuring multiple copies of the same object don't exist in the same unit of work,
ensuring deleted objects are not referenced after commit, and
ensuring that objects from the parent cache are not referenced in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes or in
advanced situations where the application requires or desires to violate clone identity in the unit of work.
It is strongly suggested that clone identity not be violated in the unit of work.
return getValidationLevel() == Full;
|
public boolean | shouldPerformNoValidation()ADVANCED:
The unit of work performs validations such as
ensuring multiple copies of the same object don't exist in the same unit of work,
ensuring deleted objects are not referenced after commit, and
ensuring that objects from the parent cache are not referenced in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes or in
advanced situations where the application requires or desires to violate clone identity in the unit of work.
It is strongly suggested that clone identity not be violated in the unit of work.
return getValidationLevel() == None;
|
public boolean | shouldPerformPartialValidation()ADVANCED:
The unit of work performs validations such as
ensuring multiple copies of the same object don't exist in the same unit of work,
ensuring deleted objects are not referenced after commit, and
ensuring that objects from the parent cache are not referenced in the unit of work cache.
The level of validation can be increased or decreased for debugging purposes or in
advanced situations where the application requires or desires to violate clone identity in the unit of work.
It is strongly suggested that clone identity not be violated in the unit of work.
return getValidationLevel() == Partial;
|
public boolean | shouldReadFromDB()INTERNAL:
Indicates whether readObject should return the object read from the db
in case there is no object in uow cache (as opposed to fetching the object from
parent's cache). Note that wasNonObjectLevelModifyQueryExecuted()==true implies inTransaction()==true.
return wasNonObjectLevelModifyQueryExecuted();
|
public boolean | shouldResumeUnitOfWorkOnTransactionCompletion()INTERNAL:
Returns true if this UnitOfWork should be resumed after the end of the transaction.
Used when the UnitOfWork is synchronized with external transaction control.
return this.resumeOnTransactionCompletion;
|
public void | storeDeferredModifyAllQuery(oracle.toplink.essentials.queryframework.DatabaseQuery query, oracle.toplink.essentials.internal.sessions.AbstractRecord translationRow)INTERNAL:
Store the deferred UpdateAllQueries from the UnitOfWork in the list.
if (deferredModifyAllQueries == null) {
deferredModifyAllQueries = new ArrayList();
}
deferredModifyAllQueries.add(new Object[]{query, translationRow});
|
public void | storeModifyAllQuery(oracle.toplink.essentials.queryframework.DatabaseQuery query)INTERNAL:
Store the ModifyAllQueries from the UnitOfWork in the list. They are always
deferred to commit time.
if (modifyAllQueries == null) {
modifyAllQueries = new ArrayList();
}
modifyAllQueries.add(query);
|
public void | synchronizeAndResume()INTERNAL:
Synchronize the clones and update their backup copies.
Called after commit and after commit-and-resume.
// For pessimistic locking all locks were released by commit.
getPessimisticLockedObjects().clear();
getProperties().remove(LOCK_QUERIES_PROPERTY);
// find next power-of-2 size
IdentityHashtable newCloneMapping = new IdentityHashtable(1 + getCloneMapping().size());
for (Enumeration cloneEnum = getCloneMapping().keys(); cloneEnum.hasMoreElements();) {
Object clone = cloneEnum.nextElement();
// Do not add objects that were deleted; what about private parts?
if ((!isObjectDeleted(clone)) && (!getRemovedObjects().containsKey(clone))) {
ClassDescriptor descriptor = getDescriptor(clone);
ObjectBuilder builder = descriptor.getObjectBuilder();
//Build backup clone for DeferredChangeDetectionPolicy or ObjectChangeTrackingPolicy,
//but not for AttributeChangeTrackingPolicy
descriptor.getObjectChangePolicy().revertChanges(clone, descriptor, this, newCloneMapping);
}
}
setCloneMapping(newCloneMapping);
if (hasObjectsDeletedDuringCommit()) {
for (Enumeration removedObjects = getObjectsDeletedDuringCommit().keys();
removedObjects.hasMoreElements();) {
Object removedObject = removedObjects.nextElement();
getIdentityMapAccessor().removeFromIdentityMap((Vector)getObjectsDeletedDuringCommit().get(removedObject), removedObject.getClass());
}
}
// New objects are not new anymore.
// can not set multi clone for NestedUnitOfWork.CR#2015 - XC
if (!isNestedUnitOfWork()) {
//Need to move objects and clones from NewObjectsCloneToOriginal to CloneToOriginals for use in the continued uow
if (hasNewObjects()) {
for (Enumeration newClones = getNewObjectsCloneToOriginal().keys(); newClones.hasMoreElements();) {
Object newClone = newClones.nextElement();
getCloneToOriginals().put(newClone, getNewObjectsCloneToOriginal().get(newClone));
}
}
setNewObjectsCloneToOriginal(null);
setNewObjectsOriginalToClone(null);
}
//reset unitOfWorkChangeSet. Needed for ObjectChangeTrackingPolicy and DeferredChangeDetectionPolicy
setUnitOfWorkChangeSet(null);
// The collections of clones may change in the new UnitOfWork
resetAllCloneCollection();
// 2612538 - the default size of IdentityHashtable (32) is appropriate
setObjectsDeletedDuringCommit(new IdentityHashtable());
setDeletedObjects(new IdentityHashtable());
setRemovedObjects(new IdentityHashtable());
setUnregisteredNewObjects(new IdentityHashtable());
//Reset lifecycle
this.lifecycle = Birth;
this.isSynchronized = false;
|
protected void | undeleteObject(java.lang.Object object)INTERNAL:
This method is used to transition an object from the deleted objects list
back to simply being registered.
getDeletedObjects().remove(object);
if (getParent().isUnitOfWork()) {
((UnitOfWorkImpl)getParent()).undeleteObject(object);
}
|
public void | unregisterObject(java.lang.Object clone)PUBLIC:
Unregister the object with the unit of work.
This can be used to delete an object that was just created and is not yet persistent.
deleteObject can also be used, but will result in inserting the object and then deleting it.
This method will only unregister the object and its privately owned parts.
unregisterObject(clone, DescriptorIterator.CascadePrivateParts);
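A sketch of discarding a just-registered, not-yet-persistent object so that no INSERT (or DELETE) is issued for it; Employee remains a hypothetical placeholder.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class UnregisterSketch {
    public static void registerThenDiscard(Session session) {
        UnitOfWork uow = session.acquireUnitOfWork();
        Employee workingCopy = (Employee) uow.registerObject(new Employee());
        // ... later the application decides the object should never be persisted.
        uow.unregisterObject(workingCopy);   // unlike deleteObject, no SQL is issued for it
        uow.commit();                        // commits only the remaining registered changes
    }
}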
|
public void | unregisterObject(java.lang.Object clone, int cascadeDepth)INTERNAL:
Unregister the object with the unit of work.
This can be used to delete an object that was just created and is not yet persistent.
deleteObject can also be used, but will result in inserting the object and then deleting it.
// Allow unregister to be called with null and simply return.
if (clone == null) {
return;
}
//CR#2272
logDebugMessage(clone, "unregister");
Object implementation = getDescriptor(clone).getObjectBuilder().unwrapObject(clone, this);
// This defines an anonymous inner class to process the iteration operation; don't be scared, it's just an inner class.
DescriptorIterator iterator = new DescriptorIterator() {
public void iterate(Object object) {
if (isClassReadOnly(object.getClass(), getCurrentDescriptor())) {
setShouldBreak(true);
return;
}
// Check if object exists in the IM.
Vector primaryKey = getCurrentDescriptor().getObjectBuilder().extractPrimaryKeyFromObject(object, UnitOfWorkImpl.this);
// If object exists in IM remove it from the IM and also from clone mapping.
getIdentityMapAccessorInstance().removeFromIdentityMap(primaryKey, object.getClass(), getCurrentDescriptor());
getCloneMapping().remove(object);
// Remove object from the new object cache
// PERF: Avoid initialization of new objects if none.
if (hasNewObjects()) {
Object original = getNewObjectsCloneToOriginal().remove(object);
if (original != null) {
getNewObjectsOriginalToClone().remove(original);
}
}
}
};
iterator.setSession(this);
iterator.setCascadeDepth(cascadeDepth);
iterator.startIterationOn(implementation);
|
public void | updateChangeTrackersIfRequired(java.lang.Object objectToWrite, oracle.toplink.essentials.internal.sessions.ObjectChangeSet changeSetToWrite, oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl uow, oracle.toplink.essentials.descriptors.ClassDescriptor descriptor)INTERNAL:
This method is used internally to update the tracked objects if required.
// This is a no-op in this UnitOfWork class; see subclasses for the implementation.
|
public void | validateObjectSpace()ADVANCED:
This can be used to help debug an object-space corruption.
An object-space corruption occurs when your application has incorrectly related a clone to an original object.
This method will validate that all registered objects are in a correct state and throw
an error if not; the error message will contain the full stack of object references leading to the corrupt object.
If you call this method after each register or change you perform, it will pinpoint where the error was made.
log(SessionLog.FINER, SessionLog.TRANSACTION, "validate_object_space");
// This defines an anonymous inner class to process the iteration operation; don't be scared, it's just an inner class.
DescriptorIterator iterator = new DescriptorIterator() {
public void iterate(Object object) {
try {
if (isClassReadOnly(object.getClass(), getCurrentDescriptor())) {
setShouldBreak(true);
return;
} else {
getBackupClone(object);
}
} catch (TopLinkException exception) {
log(SessionLog.FINEST, SessionLog.TRANSACTION, "stack_of_visited_objects_that_refer_to_the_corrupt_object", getVisitedStack());
log(SessionLog.FINER, SessionLog.TRANSACTION, "corrupt_object_referenced_through_mapping", getCurrentMapping());
throw exception;
}
}
};
iterator.setSession(this);
for (Enumeration clonesEnum = getCloneMapping().keys(); clonesEnum.hasMoreElements();) {
iterator.startIterationOn(clonesEnum.nextElement());
}
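A debugging sketch; Employee, its setManager accessor, and the variables are hypothetical, illustrating the clone-to-original mistake the validation is meant to catch.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class ValidateObjectSpaceSketch {
    public static void checkAfterEachChange(Session session, Employee employee) {
        UnitOfWork uow = session.acquireUnitOfWork();
        Employee workingCopy = (Employee) uow.registerObject(employee);
        // Mistake: the manager is read from the parent session, so it is an original, not a clone.
        Employee parentOriginal = (Employee) session.readObject(Employee.class);
        workingCopy.setManager(parentOriginal);
        uow.validateObjectSpace();   // throws, naming the mapping and the stack of referring objects
    }
}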
|
public boolean | wasNonObjectLevelModifyQueryExecuted()INTERNAL:
True if either DataModifyQuery or ModifyAllQuery was executed.
return wasNonObjectLevelModifyQueryExecuted;
|
public boolean | wasTransactionBegunPrematurely()INTERNAL:
Indicates if a transaction was begun by a pessimistic locking or non-selecting query.
Traverse to the root UOW to get value.
if (isNestedUnitOfWork()) {
return ((UnitOfWorkImpl)getParent()).wasTransactionBegunPrematurely();
}
return wasTransactionBegunPrematurely;
|
public void | writeChanges()ADVANCED: Writes all changes now before commit().
The commit process will begin and all changes will be written out to the datastore, but the datastore transaction will not
be committed, nor will changes be merged into the global cache.
A subsequent commit (on UnitOfWork or global transaction) will be required to finalize the commit process.
As the commit process has begun, any attempt to register objects or execute object-level queries will
generate an exception. Report queries, non-caching queries, and data read/modify queries are allowed.
On exception, any global transaction will be rolled back or marked rollback only. No recovery of this UnitOfWork will be possible.
Can only be called once. It cannot be used to write out changes in an incremental fashion.
Use to partially commit a transaction outside of a JTA transaction's callbacks. Allows you to get back any exception directly.
Use to commit a UnitOfWork in two stages.
if (!isActive()) {
throw ValidationException.inActiveUnitOfWork("writeChanges");
}
if (isAfterWriteChangesButBeforeCommit()) {
throw ValidationException.cannotWriteChangesTwice();
}
if (isNestedUnitOfWork()) {
throw ValidationException.writeChangesOnNestedUnitOfWork();
}
log(SessionLog.FINER, SessionLog.TRANSACTION, "begin_unit_of_work_commit");
getEventManager().preCommitUnitOfWork();
setLifecycle(CommitPending);
try {
commitToDatabaseWithChangeSet(false);
} catch (RuntimeException e) {
setLifecycle(WriteChangesFailed);
throw e;
}
setLifecycle(CommitTransactionPending);
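A sketch of the two-stage commit described above, used outside of JTA callbacks; error handling is simplified and Employee remains a hypothetical placeholder.
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.sessions.UnitOfWork;

public class WriteChangesSketch {
    public static void twoStageCommit(Session session, Employee newEmployee) {
        UnitOfWork uow = session.acquireUnitOfWork();
        uow.registerObject(newEmployee);
        try {
            uow.writeChanges();       // SQL is issued now; any exception surfaces here, not in beforeCompletion()
        } catch (RuntimeException writeFailure) {
            uow.release();            // the unit of work cannot be recovered after a failed writeChanges()
            throw writeFailure;
        }
        uow.commit();                 // finalizes the datastore transaction and merges into the shared cache
    }
}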
|
public void | writesCompleted()INTERNAL:
This method notifies the accessor that a particular set of writes has
completed. This notification can be used for such things as flushing the
batch mechanism.
getParent().writesCompleted();
|