Product Upgrade Recommendations
When you upgrade an AB Suite MCP Runtime application from one product release to another, the recommended approach is to keep the MCP release level the same during the upgrade.
If you cannot keep the MCP release level the same during the product upgrade, and the application must be deployed on a higher MCP release level than the one used with the older product, follow these steps to prepare the database for the product upgrade.
1. Identify the current DMSII release used by the AB Suite application. For example, you may be using DMSII 56.1 with the current AB Suite product.
2. Identify the new DMSII release that will be used with the new AB Suite product. For example, you may want to use DMSII 60.0 with the new AB Suite product.
3. Determine the number of database upgrades you need to perform to reach the MCP level that you will use with the new AB Suite product. If you need more than one database upgrade to reach the new MCP release level, determine which DMSII release level should be used for the interim database upgrade.
In this example scenario, you are using DMSII 56.1 with the current product and want to use DMSII 60.0 with the new AB Suite product. Because DMSII limits a single database upgrade to two release levels, the best DMSII release level for the interim upgrade is DMSII 58.1. As such, two database upgrades are needed to bring the example database up to the desired DMSII release level.
4. If you need to do an interim database upgrade, install the DMSII software with the release level for the interim database upgrade; otherwise, go to step 6.
5. Upgrade the database using NGEN28/DMU.
6. Install the DMSII software with the release level matching the MCP release that will be used with the new AB Suite product.
7. Upgrade the database using NGEN28/DMU.
8. After the database has been upgraded, you can deploy the application.
Note: Where possible, use the version of NGEN28/DMU that is compatible with the DMSII release level.
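As an illustration of the interim-release calculation above (this is not part of the product), the two-release-level hop rule can be sketched in Python. The release list below is an illustrative, assumed ordering of DMSII release levels; the actual sequence on your site may differ:

```python
# Illustrative, assumed ordering of DMSII release levels (not an official list).
RELEASES = ["55.1", "56.1", "57.1", "58.1", "59.1", "60.0"]

def upgrade_path(current, target, max_hop=2):
    """Return the sequence of database upgrades needed, given that a single
    DMSII database upgrade may span at most max_hop release levels."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    path = []
    while i < j:
        i = min(i + max_hop, j)      # jump as far as the limit allows
        path.append(RELEASES[i])
    return path

# Example from the text: 56.1 -> 60.0 needs two upgrades via interim 58.1.
print(upgrade_path("56.1", "60.0"))  # ['58.1', '60.0']
```

The example scenario in step 3 thus resolves to two upgrades: first to the interim release 58.1, then to 60.0.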
Using Binary Large Objects (BLOBs) in MCP Runtime
MCP Runtime supports two types of BLOBs (Binary Large Objects).
External BLOB
This is a string primitive object based on the Glb.BinaryFile template that contains a reference to an external binary file. Its IsExternal property is set to True. If it is marked as persistent in a class, the path is stored in the DMSII database as a DMSII recognized data type.
Internal BLOB
This is a persistent binary primitive object that contains a binary file. The file is stored in the DMSII database as a DMSII recognized data type.
This allows you to manage binary objects, such as photos, documents and other binary files, with your MCP Runtime application.
External BLOBs and Internal BLOBs are special: you cannot use regular verbs with them. You can only use the BLOB built-in methods to interrogate or update BLOBs.
Creating a BLOB
Non-persistent External BLOB
A non-persistent External BLOB can be created as an attribute of a class, segment, or report, or as a variable in a method.
To create a non-persistent External BLOB:
Add an attribute.
Select Framework.BinaryFile as the template for the attribute.
By setting the template for this attribute to BinaryFile, all essential properties will default to the values required for an External BLOB.
Enter a name for the attribute.
Click Done.
Persistent External BLOB
A persistent External BLOB can be created as an attribute of a class only, because it is stored in the database.
To create a persistent External BLOB:
Add an attribute.
Select Framework.BinaryFile as the template for the attribute.
By setting the template for this attribute to BinaryFile, all essential properties will default to the values required for an External BLOB.
Enter a name for the attribute.
Click Done.
Change IsPersistent to Yes.
Internal BLOB
An Internal BLOB can be created as an attribute of a class only, because it is stored in the database.
To create an Internal BLOB:
Add an attribute.
Enter a name for the attribute.
Click Done.
Change the Primitive property to Binary.
By setting the Primitive of this attribute to Binary, all essential properties will default to the values required for an Internal BLOB.
Moving Data between BLOBs
Data is moved between BLOBs using specific BLOB built-in methods. When using these methods, remember that an External BLOB holds only the path name of a binary object, whereas an Internal BLOB holds the binary object itself.
Here is a table of the available built-in methods:
Method | Description |
---|---|
<external BLOB>.GetPath() | Returns the contents of the internal Path Buffer of <external BLOB>. For a non-persistent External BLOB, the GetPath method returns a path name if the SetPath method was invoked for the external BLOB previously. For a persistent External BLOB, the GetPath method will return the value in the <Path> DB buffer if a lookup for the external BLOB’s record had been executed previously. |
<external BLOB>.SetPath(<path>) | Places the string <path> in the <External Blob> database buffer or local variable. A subsequent Store or Update will persist the path as a string in the database for the persistent External BLOB. |
<external BLOB>.Read(<internal BLOB>) | Places the contents of the file specified in <path> into the <Internal Blob> DB buffer. A subsequent Store or Update will persist the file as a BLOB in the database. Note that the file specified must have an EXTMODE of either ASCII or EBCDIC. Any other EXTMODE value will return a GLB.STATUS of ***** and an appropriate message in GLB.DBTEXT. Refer to "Error Values for Glb.DBText" for more information on GLB.DBTEXT error messages. |
<external BLOB>.Write(<internal BLOB>) | Retrieves the contents of <Internal Blob> and creates a file that has the path name specified by <External Blob>. You must assign a path to <External BLOB> and do a look up of the database record for <Internal BLOB> before invoking the <external BLOB>.Write() method. |
<BLOB>.GetLength() | This method can be used with both the External BLOB and the Internal BLOB. For an External BLOB, the GetLength method returns the length of the data on disk (in bytes). For an Internal BLOB, the GetLength method returns the length of the data persisted in the BLOB (in bytes). Before invoking the <BLOB>.GetLength() method, you must have assigned a path to the External BLOB or done a look up of the database record for the Internal BLOB. |
<BLOB>.Clear() <BLOB>.Initialize() | These methods can be used with both the External BLOB and the Internal BLOB. For a non-persistent External BLOB, the Clear method deletes the BLOB path. For a persistent External BLOB, the Clear method deletes the persisted path at the next store or update. For an Internal BLOB, the Clear method deletes the BLOB at the next store or update. |
Notes:
All of the BLOB built-in methods that are used for moving a binary object to a record in the database are associated with the External BLOB (an attribute using the BinaryFile template).
To move a binary object to an Internal BLOB (an attribute whose primitive is Binary), you must have declared at least one External BLOB.
GLB.STATUS must always be checked after invoking any of the above built-in methods.
Assignments involving BLOBs
Assignments involving BLOBs can only use like types. You can assign an internal BLOB to an internal BLOB, or assign an external BLOB to an external BLOB, but you cannot assign an external BLOB to an internal BLOB or vice versa.
These assignments are valid.
<external BLOB1> := <external BLOB2>
<internal BLOB1> := <internal BLOB2>
Storing BLOBs in the Database
To store a binary file in the database as an External BLOB:
Assign the pathname of a binary directly to a persistent External BLOB.
This assigns a path directly to the persistent External BLOB.
<class>.<ExternalBLOB>.SetPath("<BLOB path>")
DBLOB.ExtBLOB2.SetPath("(MYUSER)LOCAL/TXT/FILE ON MYPACK")
Assign the pathname of a binary indirectly to a persistent External BLOB.
This assigns a path to the persistent External BLOB via a non-persistent ExternalBLOB.
ExtBLOB1.SetPath("(MYUSER)LOCAL/TXT/FILE ON MYPACK")
If glb.status = glb.spaces
    LookUp Key DBLOB.Profile1
    If glb.status = glb.spaces
        DBLOB.ExtBLOB2 := ExtBLOB1
    Else
        Sm odt "DB Error:" &+ glb.dbtext
    End
Else
    Sm odt "Error:" &+ glb.dbtext
End
To store a binary file in the database as an Internal BLOB:
Internal BLOBs cannot be accessed directly. The path name of the binary file must be assigned to an External BLOB first, then the file is loaded into the internal BLOB from the external BLOB.
ExtBLOB1.SetPath("(MYUSER)PHOTO ON MYPACK")
If glb.status = glb.spaces
    LookUp Key DBLOB.Profile1
    If glb.status = glb.spaces
        ExtBLOB1.Read(DBLOB.IntBLOB1)
    End
End
Retrieving BLOBs from the Database
To retrieve a binary file from an External BLOB in the database:
Retrieve the pathname of a binary directly from a persistent External BLOB.
This retrieves a path directly from the persistent External BLOB
<class>.<ExternalBLOB>.GetPath()
DBLOB.ExtBLOB2.GetPath()
Retrieve the pathname of a binary indirectly from a persistent External BLOB.
This retrieves a path from the persistent External BLOB through a non-persistent External BLOB.
LookUp Key DBLOB.Profile1
If glb.status = glb.spaces
    DBLOB.ExtBLOB2.GetPath()
    If glb.status = glb.spaces
        ExtBLOB1 := DBLOB.ExtBLOB2
    End
End
To retrieve a binary file from an internal BLOB in the database:
Internal BLOBs cannot be accessed directly. The path name for the output file must be assigned to an External BLOB first; the contents of the internal BLOB are then written to that file.
ExtBLOB1.SetPath("(MYUSER)PHOTO ON MYPACK")
If glb.status = glb.spaces
    LookUp Key DBLOB.Profile1
    If glb.status = glb.spaces
        ExtBLOB1.Write(DBLOB.IntBLOB1)
    End
End
Error Values for Glb.DBText
Glb.DBText error | Meaning |
---|---|
Invalid Blob Name | The item name supplied does not belong to a valid Lob. Check that the file exists and is visible or check that the name is correct. |
Missing Blob File Title | The operation requires a non-blank MCP file name. Enter a valid MCP file name or change the operation. |
Blob File not Resident | The requested file is not present on the Host. |
Invalid Blob File Title | The Lob file title is not a valid MCP file name. |
Cannot open file - check file attributes compatibility | The file could not be opened. Check the file attributes, for example EXTMODE, to ensure compatibility. |
Performance Considerations for ROC
In large applications where NOFTLOCK is enabled on the ROC dataset structures and a large number of ROC reports are executed, the performance of <system name>/ROC might degrade. Consequently, the MIPS consumption of ROC might increase significantly.
The reason for setting NOFTLOCK on the ROC set structures is to avoid deadlocks in ROC. However, the downside of doing this is that it requires increased maintenance of the ROC structures. Therefore, do not set NOFTLOCK on the ROC set structures, unless it is necessary.
The consequence of setting NOFTLOCK on the ROC set structures is that set entries are not physically deleted when the data set records are deleted; they are only logically deleted.
It is therefore necessary to do a manual garbage collection on a regular basis to remove these entries, because logically deleted set entries remain physically present. As the number of logically deleted entries in the ROC sets increases, the performance of ROC worsens.
To address the performance problem, perform one of the following:
Increase the frequency of online garbage collections (VDBS GC) of GLB-ROC-H-SSET. (For example, one site had to do this on a daily basis.) This can become onerous.
Delete records through the GLB-ROC-H data set, rather than the GLB-ROC-H-SSET set. This reduces the performance impact of NOFTLOCK on the ROC set structures.
However, it is still necessary to initiate regular online garbage collections (VDBS Garbage Collection), but they can be done less frequently (possibly on a weekly basis).
The ROCSUPPORT library is capable of executing in one of two modes for its cleanup processing through:
GLB-ROC-H-SSET set (default)
GLB-ROC-H data set (optional)
At system startup, the ROCSUPPORT library determines the cleanup mode to be used by interrogating the USERINFO attribute of (<app usercode>)<system name>/ROC.
Here is an explanation of how ROCSUPPORT conducts the cleanup of ROC depending on the setting of the USERINFO attribute on (<app usercode>)<system name>/ROC.
USERINFO = 0 (default): ROCSUPPORT will cleanup ROC records through the GLB-ROC-H-SSET set, which is the existing ROC cleanup process.
USERINFO = 1 (optional): ROCSUPPORT will cleanup ROC records through the GLB-ROC-H data set, which is the special ROC cleanup process for when NOFTLOCK is set on the ROC-H-SSET set.
You can set the USERINFO attribute on either NGENxx/ROC in the Runtime environment or on the <system>/ROC file of the application. To start with, it is recommended that you set USERINFO on both.
To set the USERINFO value on NGENxx/ROC in the Runtime environment, use the ALTER command. You must do this after installing each AB Suite release. This ensures that the USERINFO setting will be present on the <system>/ROC file after a Build or Rebuild of the application.
ALTER (<Runtime usercode>)NGENxx/ROC ON <Runtime Pack> (USERINFO=1)
To set the USERINFO value on <system>/ROC, use the ALTER command. If you have not set USERINFO to 1 on NGENxx/ROC in the Runtime, you must set USERINFO on <system>/ROC after every system build.
ALTER (<app usercode>)<system name>/ROC on <app object pack> (USERINFO=1)
Notes:
When the USERINFO value on ROC is changed, you must restart the application so that ROCSUPPORT can detect the change in the value.
You must reapply the USERINFO value after each Interim Correction (IC) upgrade.
There are no adverse effects for applications that run with USERINFO set to 1 on ROC when NOFTLOCK is not set on the ROC sets.
Warning Message for Ispecs with Direction that have not been painted
The following warning message appears when generating an ispec with an attribute that has direction but has not yet been physically painted and saved:
“Warning: Class <class name> contains at least one attribute that has direction but has not yet been painted. This can cause corruption of the screen at runtime.
We strongly recommend opening this ispecs painter, making a small change, save, then generate the system again.”
It is recommended that you open the painter for the ispec for which this message is issued, make a small change, save the change, and then generate the system again.
Database Recovery
To be prepared for any database recovery, it is recommended to back up the database regularly. You should back up the system while backing up the database.
There are two options for a whole database recovery:
ROLLBACK – This method moves the whole database backward in time, starting with the current database and applying audit-trail before-images to move the database back to a specified point in time.
REBUILD – This method moves the whole database forward in time, starting with reloading an earlier version of the database from a dump and then applying audit-trail after-images to move the database forward.
To recover the database using the ROLLBACK option, you need to know the point in time to which the database can be rolled back. For AB Suite applications, a valid restart point is created for every report execution (excluding co-routines) and for every update and access of a public segment method library. The restart point (time) is displayed when the task commences, providing an accurate and reliable rollback point should a rollback be necessary. The format of the displayed restart point is used as the time parameter in the ROLLBACK command. Following is an example of a display from a valid restart point.
"ENV: ROLLBACK TIMESTAMP IS AUGUST 16, 2011 at 15:55:05.080"
Before any database recovery, it is recommended that you back up the audit files. This makes it possible to do a REBUILD from a backup if necessary.
It is recommended that you do a SHOW ROLLBACK before actually doing a rollback. This allows you to verify the results before performing the rollback. For example:
RUN $*SYSTEM/DMUTILITY("DB= (<user>) <database> ON <pack> RECOVER (SHOW ROLLBACK TO LEQ <display from a valid restart point>)")
For example,
RUN $*SYSTEM/DMUTILITY("DB= (MYUSER) MYDB ON MYPACK RECOVER (SHOW ROLLBACK TO LEQ AUGUST 16, 2011 at 15:55:05.080)")
After verifying the recovery results and backing up the audit files, proceed with the rollback:
RUN $*SYSTEM/DMUTILITY("DB= (<user>) <database> ON <pack> RECOVER (ROLLBACK TO LEQ <display from a valid restart point>)")
For example,
RUN $*SYSTEM/DMUTILITY("DB= (MYUSER) MYDB ON MYPACK RECOVER (ROLLBACK TO LEQ AUGUST 16, 2011 at 15:55:05.080)")
Note: These instructions do not replace the instructions in the Enterprise Database Server for ClearPath MCP Utilities Operations Guide for recovering a database. Refer to the instructions for recovering a database in Section 8 of the Enterprise Database Server for ClearPath MCP Utilities Operations Guide before attempting to recover your database.
Manage Reports with Database Rollback Capability
Reports (excluding co-routines) that are generated with this release of MCP Runtime force a syncpoint at the time of the first database update, creating a control record to which the database can be rolled back in the event of a problem. The time at which the syncpoint is taken appears in the report (refer to the details in Database Recovery explained earlier). The syncpoint is usually taken before ‘user logic’ starts.
There might be some performance degradation in reports as they wait for the syncpoint to occur. Syncpoints can only be taken at a quiet point in database activity. This requires all reports and the online to be out of transaction state. Therefore, the implementation of this fix might result in some performance degradation as reports wait for a syncpoint to be taken. It is for this reason that syncpoints are not taken for Co-routines. Co-routines might be called from the online and remain in the mix after the call is finished. Consequently, taking syncpoints for Co-routines could cause the system to hang, and therefore, is not done.
The following table describes where contention in the system can occur and how you can avoid it or minimize it.
Areas of Potential Performance Degradation or Contention | Avoiding Performance Degradation or Contention |
---|---|
A report is started from an ispec while it is in transaction state. The report could be started from the ispec's own method or a segment method called by the ispec. The report's "user logic" will not start until the ispec's cycle has finished and it comes out of transaction state. There is contention between the online and the report in this scenario if the ispec executes logic that "waits" for the report to do something or to complete. | If reports are to be run from an ispec, it is recommended that they be run from the ispec's Prepare() method before any database updates are done. If it is not possible to avoid running a report from an ispec while it is in transaction state, and the report must start running before the ispec completes its cycle, consider converting the report to a co-routine. |
Reports take a long time to start if ispecs and reports remain in transaction state for long periods. There is contention between the online and reports if either remains in transaction state for long periods and they want to access the same records. | Use critical points and the Sleep verb in report logic to reduce the time spent in transaction state. Where possible, reserve database updates until the end of the ispec cycle to reduce the time spent in transaction state. |
FTP Idle Limit
For MCP hosts to which AB Suite MCP applications are deployed, it is recommended to disable the FTP idle limit. Doing so ensures that builds proceed to completion and do not terminate as a result of FTP idle timeout. For example:
NA FTP SERVER_IDLE_LIMIT DISABLED
FTP performance might be improved by reducing the number of FTP threads used with the build. In AB Suite Developer, click the Tools menu and then select Options. Select System Modeler > Builder > MCP. Change the Number of Asynchronous FTP threads value to 15. Reducing this value reduces the number of files being prepared and transferred to the host at the same time. Adjust this value to suit your environment.
Do Not Use the DBATools Analyzer During a Database Reorganization
If you use the DBATools Analyzer, terminate it before starting a build that includes a reorganization of the database, and do not restart it until the build is complete. DBATools opens the database in a way that holds the control file open. If DBATools is active when a reorganization of the database starts, the reorganization cannot proceed.
Select the Correct DUMPINFO Files for the Runtime Environment
When selecting the SSR level of the DUMPINFO files during the installation of the IC, select the SSR level that matches the SSR level of the DMALGOL compiler used by builds from the runtime environment being upgraded.
Select the Correct SMU, DMU and CFG Code Files for Your Applications
This release includes SMU, TRANSFER, DMU, and CFG code files with codeversions corresponding to each of the MCP SSR levels supported by AB Suite MCP Runtime. When selecting the SSR level of these files during the installation of the IC, select the SSR level that matches the SSR level of the DMSII software used to maintain the database of the applications built by the runtime environment being upgraded.
If there are AB Suite application databases at different DMSII release levels (for example, some use 55.1 and some use 56.1), it is strongly recommended that separate runtime environments be created to handle the different levels. Designate one runtime for SSR 55.1 and one for SSR 56.1.
When upgrading your application database to a new DMSII SSR level, you can either
Upgrade the SMU, TRANSFER, DMU, and CFG code files in the runtime environment currently used by the application
Build the application with a different runtime environment, which has the correct SMU, TRANSFER, DMU, and CFG code files installed for the new DMSII release level of the application database.
If you choose the second option, you must build your application with the Rebuild option for the first build with the new runtime environment.
MCP Runtime Transfer and Database Reorganizations
It is recommended that you perform a runtime transfer to the target system after all reorganizations of the source system’s database. Allowing multiple reorganizations of the source system’s database to occur before performing a runtime transfer increases the risk of complications with the reorganization of the target system’s database.
The Reorganization Type used with the reorganization of the target system’s database is the same as the one used with the last reorganization of the source system’s database. If multiple reorganizations of the source system’s database occur before a transfer to the target system is done, it is possible for the Reorganization Type used with the reorganization of the target system’s database to be inappropriate.
If it is necessary to perform multiple reorganizations of the source system’s database before performing a runtime transfer to the target system, ensure that all reorganizations use the same Reorganization Type.
If you use the REORGDB Reorganization Type, you must pay attention to changes to the Reorganization type during the deployment phase. For more information, refer to Detection of REORGDB usage with non-XE structures.
Controlling Generations and Reorganizations
Detection of REORGDB usage with non-XE structures
For a system build where a reorganization of the database is expected, the reorganization fails if the Reorganization Type is REORGDB and there are changed persistent attributes in a structure that has Extended Edition set to False. To avoid a failed reorganization, this condition is detected during the deployment phase before the reorganization occurs. If the condition is detected, you are prompted to respond with the action to be taken. This table contains the valid responses and the action that is taken.
Accept response | Action taken |
---|---|
AX DS | Terminate the deployment. |
AX ONLINE | Continue with the deployment and perform an ONLINE reorganization. |
AX OFFLINE | Continue with the deployment and perform an OFFLINE No Post Dump reorganization. |
AX IGNORE | Continue with the deployment and perform a database reorganization with REORGDB, which would be expected to fail. |
Note: If an alternative reorganization type is chosen, it is stored in the LINCCNTL file. The impact this has on MCP Runtime Transfer is that the alternative reorganization type is used for the reorganization of the target system’s database, if a transfer is done before the next reorganization of the source system’s database. For this reason, it is recommended that you perform a runtime transfer after all reorganizations of the source system database to ensure that the reorganization type is appropriate for the reorganization of the target system’s database. It is also recommended that the settings for Extended Edition on ispec and Classes with no stereotype be kept in sync for the source and target configurations. Refer to MCP Runtime Transfer and Database Reorganizations for recommendations on when to perform a runtime transfer after a reorganization to the source system’s database.
Discontinuation of ADHOC
With the discontinuation of ADHOC, the alternative methods for querying the database include the following:
You can write reports or use Access External to query the database.
You can use DMSQL through calls to a user-written library to perform database queries.
Managing Station Records in GLB-DIALOGINFO Dataset
GLB-DIALOGINFO is a direct dataset. When a record is deleted, the space occupied by the record is not released if there is a record in use after the deleted record. Consequently, a direct dataset can consume more space than its small number of records would suggest.
To address this issue, the DBILIBRARY holds information about the available records in GLB-DIALOGINFO, which is stored in a table. When a new record is required for a station, DBILIBRARY finds the next available GLB-DIALOGINFO record from the table.
The size of the table limits the number of records that can be reused. Therefore, it is still necessary to manage the cleanup of GLB-DIALOGINFO records. The cleanup must occur when the fewest stations are active, to ensure that the maximum number of unused records, including those for NOF stations, are deleted.
You can use the :DCS command to specify the hour at which the GLB-DIALOGINFO dataset is to be cleaned.
The range is 0 to 23. By default, the value is 23.
You must choose a time when the least number of stations are active.
You can use the :DCA command to specify the maximum number of stations that can be active when the cleanup occurs.
The maximum value that can be specified is 99. By default, the value is 0.
You must choose a maximum value that suits the application. The purpose of this value is to postpone a cleanup until the number of active stations falls within this threshold. A value of zero causes the threshold to be ignored and a cleanup to be done at the time specified by :DCS.
The cleanup of the GLB-DIALOGINFO is triggered when the current hour matches the time specified by the :DCS command. The cleanup does not proceed when the number of active stations is above the maximum limit set by the :DCA command.
If a cleanup does not occur within the specified hour, it goes ahead and is performed later.
If a cleanup does not occur within the specified hour, you can perform either of the following or both:
Change the DIALOGINFO Cleanup Start Time (:DCS)
Increase the DIALOGINFO Cleanup Active Station Threshold (:DCA)
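The trigger conditions described above can be modelled as follows. This is an illustrative sketch only; the function name is hypothetical and does not represent the actual DBILIBRARY logic:

```python
def cleanup_should_run(current_hour, active_stations, dcs_hour=23, dca_threshold=0):
    """Model of the GLB-DIALOGINFO cleanup trigger.

    dcs_hour      -- hour set by the :DCS command (0-23, default 23)
    dca_threshold -- active-station limit set by :DCA (0-99, default 0);
                     0 means the threshold is ignored.
    """
    if current_hour != dcs_hour:
        return False                    # not the scheduled hour
    if dca_threshold == 0:
        return True                     # zero threshold: always proceed
    return active_stations <= dca_threshold

print(cleanup_should_run(23, 5))                     # True: default hour, threshold ignored
print(cleanup_should_run(23, 50, dca_threshold=10))  # False: too many active stations
```

In other words, the cleanup runs only in the :DCS hour, and a nonzero :DCA value additionally defers it while too many stations are active.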
A cleanup of the GLB-DIALOGINFO dataset also occurs at system startup and at system shutdown. The primary purpose of the cleanup at startup is to establish the table, if required. The cleanup at shutdown removes all records in the GLB-DIALOGINFO dataset, except the control record. If the size of the GLB-DIALOGINFO becomes unmanageable, the GLB-DIALOGINFO dataset can be cleaned completely by shutting down the system. You must restart the system after the system is shut down completely. There is no need to initialize GLB-DIALOGINFO.
Preparing for Large Database Populations
DMSII (Enterprise Database Server for ClearPath MCP) allows data set and set populations of up to 545,755,813,887 records. This release of Agile Business Suite supports this DMSII limit for ispecs (standard) and profiles.
DMSII requires that you section any data set or set that has a population exceeding 268,435,455 records. Very large populations of data sets with large records also need AREAS and AREASIZE to be specified.
If you are planning to increase the expected population of a class or ispec beyond 268,435,455 records, it is recommended that you perform the following before building the system with the change:
Create a DWORD registry key called ActivateVSS2blocking in the following location, if it is not already present:
For 32-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Unisys\System Modeler\Features\Builder
For 64-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Unisys\System Modeler\Features\Builder
Set the Optimize Blocksize to VSS-2 configuration property on the class or ispec to True.
Set the Number of Sections configuration property on the class or ispec to a value that is greater than 1.
If the ispec or class has a key, ensure that the profile for the key is sectioned.
If there are profiles defined for the ispec or class, ensure that these are sectioned too.
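As a back-of-the-envelope aid (not part of the product), the minimum number of sections implied by the 268,435,455-record limit can be computed like this; the helper name is hypothetical:

```python
import math

# Maximum records before DMSII requires sectioning (2**28 - 1).
MAX_UNSECTIONED = 268_435_455

def min_sections(expected_records):
    """Smallest section count that keeps each section within the limit."""
    return max(1, math.ceil(expected_records / MAX_UNSECTIONED))

print(min_sections(268_435_455))    # 1  (sectioning not yet required)
print(min_sections(1_000_000_000))  # 4
```

Any value you choose for the Number of Sections configuration property should be at least this minimum for the expected population.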
Using the Log Access DMVerbs Configuration Property
Invoking Log Access DMVerbs
To use the Log Access DMVerbs property, set the Log Access property to True.
Log Access DMVerbs is ignored when the Log Access property is set to False.
Log Access DMVerbs Syntax
When entering the syntax for the Log Access DMVerbs property, there are four options.
<spaces>
ALL
ALL EXCEPT <DMVerb list>
<DMVerb list>
The format of <DMVerb list> is:
(<DMVerb1>[<DMVerb2>[<DMVerb3>......]])
The DMVerbs used by AB Suite MCP Runtime are:
ASSIGN
ASSIGNLOB
CREATESTORE
DELETE
DELETELOB
FIND
FINDLOB
FREE
LOCK
LOCKSTORE
SECURE
The DMVerbs that will never be used by AB Suite MCP Runtime are:
GENERATE
INSERT
REMOVE
Notes:
ALL is the default setting for LOGACCESSDMVERBS. Nothing will be generated in the DASDL for LOGACCESSDMVERBS when the value of the Log Access DMVerbs property is spaces or “ALL”.
You must enter valid syntax for the LOGACCESSDMVERBS option.
Changes to the LOGACCESS or LOGACCESSDMVERBS DMSII options require the database to be down.
Example 1
(DELETE FIND FREE LOCK)
Example 2
ALL EXCEPT (GENERATE INSERT REMOVE)
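The syntax options and the note about spaces and ALL can be sketched as a small validator. This is a hypothetical helper for illustration, not a Unisys tool, and it does not attempt to reproduce the exact DASDL output:

```python
# DMVerbs recognised by AB Suite MCP Runtime (used and never-used lists combined).
VALID_DMVERBS = {
    "ASSIGN", "ASSIGNLOB", "CREATESTORE", "DELETE", "DELETELOB",
    "FIND", "FINDLOB", "FREE", "LOCK", "LOCKSTORE", "SECURE",
    "GENERATE", "INSERT", "REMOVE",
}

def parse_log_access_dmverbs(value):
    """Validate a Log Access DMVerbs value and report whether anything
    would be generated in the DASDL (spaces or ALL generate nothing)."""
    value = value.strip()
    if value in ("", "ALL"):
        return (False, [])              # default: nothing generated
    rest = value.removeprefix("ALL EXCEPT").strip()
    if not (rest.startswith("(") and rest.endswith(")")):
        raise ValueError("DMVerb list must be parenthesised")
    verbs = rest[1:-1].split()
    bad = [v for v in verbs if v not in VALID_DMVERBS]
    if bad:
        raise ValueError(f"invalid DMVerb(s): {bad}")
    return (True, verbs)

print(parse_log_access_dmverbs("ALL"))                      # (False, [])
print(parse_log_access_dmverbs("(DELETE FIND FREE LOCK)"))  # (True, ['DELETE', 'FIND', 'FREE', 'LOCK'])
```

Both examples above parse cleanly; a misspelled verb or missing parentheses raises an error, mirroring the requirement that you enter valid syntax for the LOGACCESSDMVERBS option.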
Using RATL with Locum Safe and Secure
When you use Locum Safe and Secure to manage passwords for the usercodes on your MCP server and use one of the Component Enabler clients to access your application through RATL, RATL attempts to log in up to three times after you have changed the login password of a usercode.
Sometimes RATL might submit a login request even before Locum completes the usercode password change. This results in a login failure error.
If the first login attempt fails, RATL retries the login up to two more times, with a 0.2 second delay before each retry. By the third login attempt, Locum should have completed the usercode password change.
You can change the delay time by submitting the following HI command to RATL:
<RATL Mix#>HI nnnn
where nnnn represents thousandths of a second.
For example, 1000 is 1.000 seconds, 0200 is 0.200 seconds, and 0030 is 0.030 seconds.
You can set the delay time to any value up to 3000, which is 3 seconds. If you submit a value greater than 3000, RATL resets the value to 3000.
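The unit conversion and the 3000 cap described above can be modelled as follows (an illustrative sketch; the function name is hypothetical):

```python
def ratl_delay_seconds(nnnn):
    """Convert an HI command value (thousandths of a second) to seconds,
    applying RATL's cap: values above 3000 are reset to 3000 (3 seconds)."""
    return min(int(nnnn), 3000) / 1000.0

print(ratl_delay_seconds("0200"))  # 0.2   (the default retry delay)
print(ratl_delay_seconds(5000))    # 3.0   (reset to the 3000 maximum)
```

For example, submitting `<RATL Mix#>HI 1000` sets a 1-second delay, while any value above 3000 behaves as if 3000 had been entered.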