SCHEDULE
Automated Job Submission System
Guide and Reference Manual



2.3.9 Access controls

Each job and each directory can have a set of protection settings to control who can access the information. The access information that can be associated with each job is listed below.
Owner - This field indicates the UIC that controls the job. This can be either an IDENTIFIER or a UIC code.

Protection - This field is a bitmap that grants access to the user, group, and other classes. The protection bits are used as follows:

r - Read access allowed to the job
w - Write access allowed to the job
x - Submit, initiate or prerequisite access allowed
The access modes are nested as follows: w -> r -> x. In other words, w access gives you w, r, and x access.

For example, to allow everyone on the system to examine your job but not change it, grant read access at the world level. To do this, issue the following command.


 
 
Schedule> chjob my_job -general=protection=world:r 
 
 

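Because of the nesting described above, granting write access at the group level also gives group members read and submit access. The following is a hypothetical variation of the command above, assuming the group class accepts the same syntax as world.


Schedule> chjob my_job -general=protection=group:w
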
For example, to grant a specific user (SMITH in this example) read access to your job via an ACL entry, issue the following command.


 
 
Schedule> chjob my_job -general=acl=(identifier=smith,access=read) 
 
 

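An ACL entry can presumably grant other access levels in the same way. For example, to give the hypothetical user JONES write access to your job (the access keyword write is an assumption based on the protection modes above), a command along these lines could be used.


Schedule> chjob my_job -general=acl=(identifier=jones,access=write)
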
To remove an access control list, use the following command.


 
 
Schedule> chjob my_job -general=noacl 
 
 

To view the access information, use the following command.


 
 
Schedule> ls -general=security my_job 
 
 
 
Job directory /johnson/ 
 
  my_job/johnson/       S:RWEDC,O:RWEDC,G:RW,W:R 
  (IDENTIFIER=[SMITH],ACCESS=READ) 
 

2.4 Configuring Schedule

The following sections describe how to set up and configure some of the basic operations of the SCHEDULE system.

2.4.1 Scheduling groups

A group is all the processes and data files that are needed to perform any scheduling function. Up to ten completely independent SCHEDULE Systems can be running on any one computer system. When the distribution tape is loaded, group 0 is automatically created. Each copy is given a group number between 0 and 9. A group 0 process can talk to any other group 0 process in the cluster or network. Each group is totally independent; there is no way10 to share jobs or data between groups, and each group has its own data files and executable images. This separation is very useful if, for example, both a development group and a production group are using the same machine and total separation is required.

To create a new group, group 1 in this example, use the following commands.


 
 
$ /sys$manager/schedule_startup new_group 1 
 
 
The following command should be issued on all nodes of the cluster: 
 
 
 
$ /sys$manager/schedule_startup boot 1 
 
 

To enter the standard definitions into this new group, use the following commands.


 
 
$ setenv schedule_group_number 1 
 
 
$ /schedule_library/schedule_standard_procs 
 
 

When a new version of the software has been received, use the following commands to update the group files.


 
 
$ /sys$manager/schedule_startup update_group 1 
 
The following command should be issued on all nodes of the cluster: 
 
 
$ /sys$manager/schedule_startup newversion 1 
 
 

The logical name SCHEDULE_GROUP_NUMBER defines which server group to communicate with. If a different group number is desired, it can be defined system-wide to establish a default database. To override the default, define the logical process-wide and re-execute schedule_login.
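
For example, to direct only the current process at group 2, something like the following could be used (a sketch that assumes the same setenv syntax shown earlier and that schedule_login resides in /schedule_library/).


$ setenv schedule_group_number 2
$ /schedule_library/schedule_login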

After creating the new group, run schedule_standard_procs to put default jobs and calendars in the new database.

2.4.2 Server functions

The SCHEDULE System is designed as a client/server processing pair. All the foreground user interfaces connect to the server process to perform the primary functions. The foreground clients are the user interface programs. Currently there are two styles of user interface: 1) commands and qualifiers, and 2) the MOTIF interface. These foreground processes communicate with the user and then request the server to perform all activities.

The server process performs all database I/O, network communications, scheduling state changes, batch job submissions, and activity monitoring. If a foreground process makes a request that refers to a remote node, the server routes that request across the network to the designated node.12

2.4.3 Net proxy requirements

All network communications are done by sending a request packet from one server process on a node to a server process on another node. The SCHEDULE server declares itself a network object.13

No proxy logins are used to implement this communications link.

Once an incoming request has been received by the server process from a remote node, a check is made to determine whether the user on that remote node has the needed privileges to perform the requested function. This check is done using the following steps.

  1. Use the access rights of the local user and compare them with the protection settings of the object that is being accessed. If access is refused, reject the request.

The parameter NETPROXY_DEFAULT_USER can be set to establish a local user name to be used in the event that a search through the NETPROXY file does not turn up any equivalences.
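
For example, an entry along the following lines would cause unmatched remote users to be treated as a designated local account. The parameter-file syntax and the user name GUEST are illustrative only; the parameters themselves are kept in SCH0_PARAMETER.DAT.


NETPROXY_DEFAULT_USER = GUEST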

It is important in a Peer-to-Peer or Satellite-Central configuration to set up proxy equivalences for all users15 that will be communicating over the network.
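
On an OpenVMS system, proxy equivalences are normally managed with the AUTHORIZE utility. The following sketch uses purely illustrative node and user names.


$ run sys$system:authorize
UAF> add/proxy node1::smith smith /default
UAF> exit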

2.4.4 Networks

Network communications are involved in several different aspects of the SCHEDULE system. The two main areas are database locality and job interconnection in a network.

There are two ways to inform the system that you wish to communicate with a remote node. The first method is to add the -NODE=node_name qualifier to any command. For example, to do a directory operation on a remote node, use the following command.


 
Schedule> ls * -node=ise 
 

The second method entails defining a logical name that contains the target node. The following commands give the same result as the command above.


 
 
UNIX> setenv schedule_database_node ise 
 
 
Schedule> ls * 
 
 

2.4.4.1 Database locality

Usually a single SCHEDULE database is installed inside a single UNIX management domain (referred to as the SYSTEM in this documentation). A single management domain includes all UNIX systems clustered together that share such things as the authorization file, queue file, and online disks. Sometimes it is advantageous to have a single SCHEDULE database across all systems in the network even though they are not in the same management domain.

A Peer-to-Peer configuration is one in which each system has its own SCHEDULE database and on occasion needs to synchronize jobs between these systems. This mode allows each system to do most of its own work without regard to the up/down status of the other members of the network. A network request will wait for the remote node to become accessible before completing.

A Satellite-Central configuration is one in which a single system is designated as the repository of the central SCHEDULE database. All other nodes in the network indicate16 that the database is located on this central node.17

In this mode the central system must be up and accessible for any scheduling activity to be performed on a remote node. This mode is generally used in an environment with a large central cluster and many standalone workstations connected to a common network. The workstations are too small to merit a full SCHEDULE setup, but certain jobs need to be run periodically on those systems.
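
As noted in the footnotes, the parameter DATABASE_NODE in SCH0_PARAMETER.DAT tells a satellite where the central database resides. A satellite's entry might therefore look like the following; the parameter-file syntax and the cluster alias CENTRL are illustrative only.


DATABASE_NODE = CENTRL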

2.4.4.2 Job interconnections

There are two methods for interconnecting jobs across a network. Jobs are interconnected by defining a prerequisite or an initiate condition between any two jobs. A network connection is one in which these two jobs are on different nodes in the network.18

In a Peer-to-Peer configuration, a node name is added to each prerequisite or initiate list entry that refers to a job on a remote node. A typical prerequisite list would appear as follows.


 
 
Schedule> more -prerequisite /demo/a/report_3 
 
 
 
/demo/a/report_3.prerequisite 
 
 
    NODE1::UPDATE_A 
    UPDATE_B 
    NODE2::UPDATE_C 
 

In a Satellite-Central configuration, a node name is added to the submit attributes of any job that is to execute on a remote node. There is no need to add the remote node name to the initiate or prerequisite list entry. For example, to mark a job to execute on the remote system NODE1, use the following command.


 
Schedule> chjob update_a -submit=node:node1 
 
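
For comparison, in this mode the prerequisite list itself contains only plain job names; the Peer-to-Peer list shown earlier would reduce to the following.


/demo/a/report_3.prerequisite


    UPDATE_A
    UPDATE_B
    UPDATE_C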

2.4.5 Heterogeneous clusters

A heterogeneous cluster is an environment where the hardware is one large cluster (i.e., shared HSCs, disk drives, and tape drives) but the systems in the cluster are run as two or more groups. Each group has its own AUTHORIZE file, queue file, and system disk. Each group would also have its own SCHEDULE database.

The lock manager is distributed across the whole HARDWARE cluster regardless of whether the system disk is the same or different. The SCHEDULE system uses the lock manager to synchronize activity between the various server processes. These locks must also be partitioned along these same lines.

There are three parameters19 which control how the SCHEDULE System partitions locks. They are listed below.

2.4.6 System database

The SCHEDULE database is made up of three files: CONTROL.DAT, HISTORY.DAT, and QUEUE.DAT.

All three files are standard RMS files. On occasion it is useful to reorganize them and reclaim free space. A command procedure has been provided to perform this operation.


 
 
$ @schedule_library:schedule_convert 
 
 
Select CONVERT operation to perform: 
 
(The server should not be running when 
 any of these operations are done.) 
 
    1. CONVERT CONTROL.DAT 
    2. CONVERT HISTORY.DAT 
    3. Reclaim space in QUEUE.DAT 
    4. all of the above 
 
    9. EXIT 
 
Function: 4 
 

Note

10 A job can be extracted from one group and then inserted into another. Use the SCHEDULE_EXTRACT

12 The server interface is composed of two executable images. SCHEDULESHR contains all the routines that are called by the foreground applications. These routines then communicate with SCH0_SERVER which is the server process.

13 The NCP object number used is 0.

15 When a job has a remote prerequisite or initiate, it is the user name of the job that matters. For interactive use, it is the user name of the person logged in at the terminal.

16 The parameter DATABASE_NODE in the SCH0_PARAMETER.DAT file indicates on which node the central database resides.

17 Typically the central system is a cluster. In this case it is important to specify the cluster alias as the node name. This allows the remote systems to communicate with any member of the central cluster that is up.

18 For network operations to work all nodes involved must be running the SCHEDULE System.

19 The parameters are contained in the file SCH0_PARAMETER.DAT.

