By: Rich Warren
1.0 INTRODUCTION
Dr. Pattie Maes, of the Massachusetts Institute of Technology Artificial Intelligence Laboratory, has researched situated agents that choose a course of action based on the current situation. In her A.I. Memo [1], Dr. Maes documents the framework necessary to reproduce the results of an academic example application of situated agents based on her agent architecture.
This project is concerned with prototyping the Maes architecture
and verifying its stability through replication of her example
results. The remainder of this report will address the design
and implementation phases of the prototype's development and will
address issues relevant to future development. A listing of the
prototype's input and output is found in appendix A while a listing
of all source code is included in appendix B. Reference items
are available through the Intelligent Systems Technical Library.
2.0 DESIGN
As it is likely that results of this effort will be integrated
with an Integrated AI Development Environment (IADE), an Object
Oriented Design (OOD) approach is followed. The remainder of this
section will describe the system's higher-level classes and show
the relationship between many of the objects in the system.
2.1 Simulation Manager
The simulation manager maintains a list of pointers to one or
more competency networks. At each iteration through the simulation,
this list is traversed from front to back and each of the competency
network managers is called to execute itself. Figure
1 illustrates the relationship between the simulation manager
and the network manager.
Figure 1. Object Relationship Between Simulation Manager and Network Managers
Notice that the network manager owns three additional managers:
goals manager, states manager and module manager.
Upon initialization and throughout the simulation executive loop,
the network manager calls upon each of these three individual
managers to execute themselves. Each of the three managers is,
for example, responsible for its own initialization methods as well
as for spreading its share of activation throughout the network
at run time.
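The executive loop described in section 2.1 can be sketched in C++ as follows. All identifiers here (SimulationManager, NetworkManager, execute) are illustrative assumptions of this sketch, not the prototype's actual class names.

```cpp
#include <list>

// Hypothetical sketch of the simulation executive loop.
class NetworkManager {
public:
    NetworkManager() : iterations(0) {}
    // Execute one simulation step for this competency network.
    // A real implementation would spread activation and select a module.
    void execute() { ++iterations; }
    int iterations;  // number of steps this network has executed
};

class SimulationManager {
public:
    void addNetwork(NetworkManager* net) { networks.push_back(net); }

    // One iteration: traverse the pointer list front to back and call
    // each competency network manager to execute itself.
    void step() {
        for (std::list<NetworkManager*>::iterator it = networks.begin();
             it != networks.end(); ++it)
            (*it)->execute();
    }

private:
    std::list<NetworkManager*> networks;  // one or more competency networks
};
```

A team of players would simply register one NetworkManager per player before the loop begins.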
2.2 Goals Manager
The goals manager is responsible for its own initialization methods
as well as for the spreading of activation backwards through the
network. Figure 2 shows the relationship between the goal manager,
its goal objects and network activation links.
Figure 2. Relationship between the goal manager, goal objects and activation links
The goal manager has access to the global network parameter gamma
used to weight the activation spread by the individual goals.
The member functions include accessors to gamma and to
the list of goals maintained by the goal manager. The goal list
shown in Figure 2 is a list of pointers to Goal Objects.
Each of the goal objects contains a predicate and a collection
of internal member functions. As goals become activated by modules
that precede them, each of the goal objects maintains a list of
predecessor links. In the example illustrated in Figure 2, the
goal object board-sanded maintains predecessor links to
the competency modules SAND-BOARD-IN-HAND and SAND-BOARD-IN-VICE.
As discussed in section 3.1.7, the value of the cardinality of the
set returned by M(j) is embedded in the link itself to obtain
maximum runtime efficiency.
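A goal object carrying its predecessor links, with the precomputed cardinality embedded in each link, might look like the following C++ sketch. The identifiers (Link, Goal, GoalManager) and the stored cardinality field are assumptions of this sketch.

```cpp
#include <list>
#include <string>

class Module;  // competency module, defined elsewhere

// A link embeds the precomputed set cardinality (e.g. #A(j)) so no
// table look-up is needed at run time.
struct Link {
    Module* target;      // module this goal spreads activation to
    double cardinality;  // cardinality for the linked proposition
};

class Goal {
public:
    Goal(const std::string& pred) : predicate(pred) {}
    std::string predicate;         // e.g. "board-sanded"
    std::list<Link> predecessors;  // links to modules that achieve it
};

class GoalManager {
public:
    double gamma;            // global weight on goal activation
    std::list<Goal*> goals;  // the goal list of Figure 2
};
```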
2.3 State Manager
The states manager is responsible for its own initialization methods
as well as for the spreading of activation forwards through the
network. Figure 3 shows the relationship between the state manager,
its state objects and network activation links.
Figure 3. Relationship between the state manager, state objects and activation links
The state manager has access to the global network parameter phi
used to weight the activation spread by the individual states.
The member functions include accessors to phi and to the
list of states maintained by the state manager. The state list
shown in Figure 3 is a list of pointers to State Objects.
Each of the state objects contains a predicate and a collection
of internal member functions. As states spread activation forward
to modules that succeed them, each of the state objects maintains
a list of successor links. In the example illustrated in Figure
3, the state object hand-is-empty maintains successor links
to the competency modules PICK-UP-SPRAYER and PICK-UP-SANDER.
As discussed in section 3.1.7, the value of the cardinality
of the set returned by A(j) is embedded in the link itself to
obtain maximum runtime efficiency.
2.4 Module Manager
The module manager has access to the global network parameter sigma used to weight the activation spread forward, backward and sideways by the individual modules. The member functions include accessors to sigma and to the list of modules maintained by the module manager. The module list shown in Figure 4 is a list of pointers to Module Objects. Each of the module objects contains a predicate and a collection of internal member functions. As modules spread activation to modules that precede, succeed and compete with them, each of the module objects maintains a list of predecessor, successor and conflicter links.
In the example illustrated in Figure 4, the module object PICK-UP-SPRAYER
maintains predecessor, successor and conflicter links. These links
are lists of pointers to Link Objects which embed the
value of the cardinality of the set returned by the appropriate
look-up function (i.e., M(j), A(j), U(j)). These link objects
in turn point to the competency module to which the parent module
is to spread activation.
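A module object with its three link lists, mirroring Figure 4, might be sketched in C++ as follows. The identifiers (ActivationLink, Module, setSize) are assumptions of this sketch, not the prototype's actual names.

```cpp
#include <list>
#include <string>
#include <vector>

class Module;  // forward declaration so links can point at modules

// Each link embeds the cardinality of the set returned by the
// appropriate look-up function: #M(j), #A(j), or #U(j).
struct ActivationLink {
    Module* target;  // competency module to spread activation to
    double setSize;  // embedded, precomputed cardinality
};

class Module {
public:
    std::string name;                        // e.g. "PICK-UP-SPRAYER"
    std::vector<std::string> condList;       // precondition list
    std::vector<std::string> addList;        // add list
    std::vector<std::string> delList;        // delete list
    std::list<ActivationLink> predecessors;  // backward activation
    std::list<ActivationLink> successors;    // forward activation
    std::list<ActivationLink> conflicters;   // sideways inhibition
    double activation = 0.0;                 // current activation level
};
```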
3.0 IMPLEMENTATION
Following the design phase of this prototyping effort, software implementation issues were considered. As it is likely that results of this effort will be integrated with IADE software, the decision was made to code the competency network using the C++ programming language. This decision takes advantage of a large base of existing software and development environments.
An incremental approach is taken for the development of the competency
network prototype. The first build, for example, included only
the implementation of the system's parser, description language
and network links. The second build saw the implementation of
the spreading of activation both backward from the goals and forward
from the states. This was sufficient to replicate Maes' results
through time 1 because, at that time, the activation level of the
competency modules is zero and enters the spreading equations as
a multiplicative factor, so the modules themselves spread no activation. In the third build, functions and classes were written in support of spreading activation forward, backward and sideways from the competency modules at time 2. Note that the competency modules now had a history of activation from the goals and states at time t-1. The remainder of this section will discuss the implementation of the competency network's preprocessing and runtime modules.
3.1 Competency Network Preprocessing
In order to obtain maximum run time efficiency, much
processing is done at execution time but before the start of the
simulation's executive loop. This preprocessing includes
parsing the network description file, establishing network connectivity
links, and creating an efficient look-up scheme.
3.1.1 Network Description File
A proprietary language has been developed to allow a description
of the competency network to be read, at execution time, into
the system from an ASCII text file. This language has a fairly
straightforward syntax and is implemented using thirty reserved
words and a single-pass parser. The parser treats text after
the semicolon character (i.e., ";") as a comment and ignores white
space.
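The comment and whitespace handling might be sketched as a small per-line pass in C++. The function name tokenizeNdfLine is an assumption of this sketch, not the prototype's actual parser.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of the single-pass tokenizer's comment handling:
// text after ';' on a line is discarded, and the remainder is split
// on white space.
std::vector<std::string> tokenizeNdfLine(const std::string& line) {
    // Drop everything from the first semicolon to end of line.
    std::string code = line.substr(0, line.find(';'));
    std::vector<std::string> tokens;
    std::istringstream in(code);
    std::string tok;
    while (in >> tok)  // operator>> skips white space between tokens
        tokens.push_back(tok);
    return tokens;
}
```

For example, the line "SET GAMA 70 ; goal weight" would yield the three tokens SET, GAMA, and 70.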
The network description file (NDF) has five sections which include
initialization commands, goal descriptions, state descriptions,
the competency modules descriptions and a section to instruct
the simulation executive to create the network links. Each of
the system's five global parameters used to 'tune' the spreading
activation is also set from within the appropriate section.
3.1.2 Initialization
The initialization section must include a line to command the system to create a NEW network and to associate that network with the name following the reserved word CMNET (e.g., "Player1"). Notice that more than one network may be described in a single file. A team of autonomous players can, for example, be described with a single description file by including an additional NEW CMNET Player2 command line. Figure 1 shows the data structure created by the initialization method.
The simulation manager is a list of pointers to competency network
objects (e.g., player_1, player_2,
player_N). At each iteration
through the simulation, the list is traversed such that program
control is passed to each network object successively. The networks
are described in terms of goals, states, and competency modules.
That is, each network object owns a goalManager,
stateManager and a moduleManager.
3.1.3 Network Goals Descriptions
The network goals are described in the network
data file as illustrated in Figure 5 below. A network Goal
Manager is created on line 1 to manage the goals described
in line 2.
1. GoalMgr{
2. Glist{ board-sanded self-painted
}Gl
3. SET GAMA 70
4. }GM
Figure 5. The Goals Manager
The amount of activation (i.e., gamma) the goals spread backward
into the network is set in line 3 and the end of the goals description
is signaled on line 4. For detailed discussion on gamma
see page seven of the Maes paper.
3.1.4 Network State Descriptions
The network states are described in the network data file as
illustrated in Figure 6 below. A network State Manager is created
on line 1 to manage the states described on lines 2 and 3. Line
4 closes the state list while the amount of activation (i.e.,
phi) the state manager spreads forward through the system is set
on line 5.
1. StateMgr{
2. Slist{ hand-is-empty sander-somewhere sprayer-somewhere operational
3. }SL
4. SET PHI 20
5. }SM
Figure 6. The States Manager
For detailed discussion on phi see page seven of the Maes paper.
3.1.5 Network Competency Module Descriptions
The network competency modules are described in the network data
file as illustrated in Figure 7 below. A Competency Manager
is created on line 1 to manage the modules defined following the
reserved word DefMod.
1. ModuleMgr{
2. DefMod{
3. SET NAME PICK-UP-SPRAYER
4. CondList{ sprayer-somewhere hand-is-empty }CL
5. AddList{ sprayer-in-hand }AL
6. DelList{ sprayer-somewhere hand-is-empty }DL
7. }DM
8.
9. SET SIGMA 50
10. SET THETA 90
11. }MM
Figure 7. The Competency Module Manager
The reserved word DefMod on line 2 instructs the network
constructor to define a new competency module using the module
description that follows. As seen in the NDF used in the software
development phase of this project, several module descriptions
may be defined from within the module manager construct [3]. Line
3 of Figure 7 shows that the module's name is set to PICK-UP-SPRAYER
while lines four through six are used to construct the precondition
list, add list, and delete list introduced on
page three of the Maes paper. Line 7 is a reserved word to close
the module definition while lines 9 and 10 set the global parameters
sigma and theta.
Sigma is defined as the amount of activation energy a protected
goal takes away from the network, while theta defines the threshold
of activation a module must exceed before it can become active.
For detailed discussion on sigma and theta see page seven of the
Maes paper.
3.1.6 Network Links
The algorithms for establishing successor, predecessor
and conflicter links throughout the network have been implemented.
These algorithms build on the architecture shown in Figure 4 through
an automated analysis of the precondition, add and delete lists.
The rules that govern the connectivity are described on page 4
of the Maes paper while the resultant architecture
with the network links is shown in Figure 4.
The implementation approach for establishing successor and predecessor
links takes advantage of the fact that there exists a predecessor
link for every successor link in the network. By following the
rules of connectivity then, one need only create a pointer to
each of the two modules to be linked. In this fashion, the modules
can easily be made successor and predecessor of one another. The
approach taken for the conflicter links closely follows the rule
specified in the Maes paper.
As the network manager owns a goals manager, states manager and
module manager, establishing network links involves requesting
the precondition, add and delete lists from each of the managers.
Those lists are then traversed using nested loops in such a way
that the rules that govern connectivity are followed and the links
established.
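The nested-loop traversal described above might be sketched as follows. The Mod struct and the linking scheme are assumptions of this sketch; the prototype's real links also embed the set cardinalities, which are omitted here for brevity.

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Mod {
    std::string name;
    std::vector<std::string> condList, addList;
    std::vector<Mod*> successors, predecessors;
};

// For every proposition x adds that y requires, x gains y as a
// successor and, symmetrically, y gains x as a predecessor. This
// exploits the fact that the two link types always come in pairs.
void buildLinks(std::vector<Mod>& mods) {
    for (size_t i = 0; i < mods.size(); ++i)
        for (size_t j = 0; j < mods.size(); ++j)
            for (size_t a = 0; a < mods[i].addList.size(); ++a)
                if (std::find(mods[j].condList.begin(),
                              mods[j].condList.end(),
                              mods[i].addList[a]) != mods[j].condList.end()) {
                    mods[i].successors.push_back(&mods[j]);
                    mods[j].predecessors.push_back(&mods[i]);
                }
}
```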
3.1.7 Look-up Functions
The calculation of activation that each module spreads forward,
backward and sideways is computed at each iteration through the
simulation. One factor involved in the calculations is the cardinality
of several sets containing modules that match some given criteria.
Section 3 on page 8 of the Maes paper specifies the need for three
functions M(j), A(j), and U(j) that return
the sets from which the cardinality is easily determined. Of course,
once the network is established and all the predicates in the
system are known, the cardinality of these sets becomes static,
so recomputing these values at each iteration would be wasteful.
A look-up table would be a much more efficient runtime approach:
compute the cardinality of the sets once and store the
values in a tabular format. Even this approach can be optimized,
however, because the table would need to be accessed once per
link of each module on each iteration through the simulation.
The approach settled on for this effort was to compute the value
and embed that value directly in the link itself as seen in Figure
4. Though finding these values requires some redundant programming
and is not at all efficient, this is preprocessing and is
only done once before the start of the simulation loop. The savings
in run-time performance more than outweigh the cost of redundant
preprocessing.
3.2 Competency Network Runtime Modules
This section will describe the implementation of several of the
system's runtime modules. Section 3 on page 6 of the Maes paper
presents a mathematical model of the agent architecture, while section
2 presents the global algorithm that performs a loop in which,
at every timestep, several steps of computation take place over all
of the competence modules. This section will walk through section
3, check off those functions that are completed, and comment
on those that are left to complete. Afterward, the global algorithm
will be walked through in the same fashion.
3.2.1 The Mathematical Model
3.2.2 The Global Algorithm
Section 2 on page 3 of the Maes paper presents a global algorithm
that performs a loop in which at every timestep makes several
computations over all of the competence modules. This section
will walk through the algorithm and check off those steps that
are completed and comment on those that are left to complete.
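As a hedged illustration, the per-timestep computation described by the global algorithm might be organized as follows. All function names are assumptions of this sketch; each stub simply records that it ran, and the decay step is left abstract because its mathematics are not documented.

```cpp
#include <string>
#include <vector>

std::vector<std::string> trace;  // records which steps have run

void impactOfState()          { trace.push_back("state"); }
void impactOfGoals()          { trace.push_back("goals"); }
void impactOfProtectedGoals() { trace.push_back("protected"); }
void spreadThroughLinks()     { trace.push_back("spread"); }
void applyDecay()             { trace.push_back("decay"); }
void selectAndExecute()       { trace.push_back("select"); }

// One timestep: inject activation from the state and goals, spread it
// through the network links, decay, then pick an executable module
// whose activation exceeds the threshold theta.
void simulateTimestep() {
    impactOfState();
    impactOfGoals();
    impactOfProtectedGoals();
    spreadThroughLinks();
    applyDecay();
    selectAndExecute();
}
```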
3.2.2.1 Impact of the State
Page 9 of the Maes paper describes, in mathematical terms, the
spreading of activation to module x from the state at time
t. This method illustrates the use of the preprocessed look-up
table described in the previous section as well as the global
parameter phi described previously in this report. This
method is completed and its accuracy verified by comparison
of the output of the example offered in the Maes paper on page
15 with the output of this prototyping effort as seen in appendix
A.
3.2.2.2 Impact of the Goals
Maes also describes the spreading of activation to module x from
the goals at time t. This method illustrates the use of the preprocessed
look-up table described in the previous section as well as the
global parameter gamma described previously in this report. This
method is completed and its accuracy verified by comparison
of the output of the example offered in the Maes paper on page
15 with the output of this prototyping effort as seen in appendix
A.
3.2.2.3 Impact of the Protected Goals
Work towards the completion of code needed for implementation
of the spreading of activation from the protected goals to module
x has yet to begin. To date, however, this has not had
an impact on the project as, in the example robot simulation, the
goals do not become protected until later in the simulation (i.e.,
t=8).
3.2.2.4 Spreading Activation Through Successor Links
Page 9 of the Maes paper describes, in mathematical terms, the backward spreading of activation to module y from module x at time t. This method illustrates the use of the preprocessed look-up table for #A(j). The spreading of activation backward requires two additional methods: one for determining the executable status of module x, and S(t), which determines whether the proposition currently spreading activation is an element of the state.
This method is completed and its accuracy verified by
comparison of the output of the example offered in the Maes paper
on page 15 with the output of this prototyping effort as seen in
appendix A.
3.2.2.5 Spreading Activation Through Predecessor Links
Page 10 of the Maes paper describes, in mathematical terms, the forward spreading of activation to module y from module x at time t. This method illustrates the use of both the preprocessed look-up table for #M(j) and the global parameters phi and gamma. The spreading of activation forward requires two additional methods: one for determining the executable status of module x, and S(t), which determines whether the proposition currently spreading activation is an element of the state.
This method is completed and its accuracy verified by
comparison of the output of the example offered in the Maes paper
on page 15 with the output of this prototyping effort as seen in
appendix A.
3.2.2.6 Spreading Inhibition Through Conflicter Links
Page 10 of the Maes paper describes, in mathematical terms, the sideways spreading of inhibition to module y from module x at time t. This method illustrates the use of both the preprocessed look-up table for #U(j) and the global parameters sigma and gamma. The spreading of inhibition requires two additional methods: one for determining the maximum of two values, and S(t), which determines whether the proposition currently spreading inhibition is an element of the state.
This method is completed and its accuracy verified by
comparison of the output of the example offered in the Maes paper
on page 15 with the output of this prototyping effort as seen in
appendix A. Note: though the results of this prototyping effort
match the results offered in the Maes paper, there is a question
with regard to the validity of the results. See the section
on problems for further detail.
3.2.2.7 The Decay Function
The decay function is designed to ensure that the overall activation
level remains constant. Little is known about the specifics of
this function as Dr. Maes does not offer any detail on the mathematics
behind the decay process. Considerable effort has been made in an
attempt to derive the function from an analysis of the input/output
at several timesteps but, to date, no solution has been found.
An attempt is, however, being made to contact Dr. Maes to ask for
assistance.
4.0 FUTURE DEVELOPMENT
The primary focus of the current phase of this project is to replicate
Dr. Maes' example robot results in order to establish
a baseline software implementation of the situated agents architecture.
The remainder of this section will summarize the status of the
project's executable code, identify and address present concerns
and discuss plans for completion of the baseline architecture.
4.1 Status of Executable Code
Because Dr. Maes' agents architecture has been verified and validated [1,2,3] we have elected to establish our baseline implementation of the architecture through replication of the results of her robot example. This example completes execution after 19 iterations through the simulation loop. It has been determined, however, that 90 percent of the implementation must be complete in order to iterate through the first time step. This is due to the large amount of preprocessing necessary to establish the network connections and to the fact that nearly all of the algorithms and supporting math must be in place even to replicate the results of the first iteration through the simulation loop.
This research program's executable currently steps through 1.5
iterations. A comparison of the Maes output at time t(2) against
the GreyStone output at time t(2) shows that the spreading of
activation from the state forward to the competency modules, from
the goals backward to the modules, and from the modules themselves
to each other is accurate. Further comparison at time t(2), however,
shows differing "activation-levels of modules after decay."
The system's output is included in appendix A.
4.2 Present Concerns
Although a close study of Maes' MIT A.I. memo reveals much about
the nature of a system decay function, specific details
are not included. An effort is now underway to contact Dr. Maes
to ask for assistance in fleshing out these details. Should this
effort prove fruitless, it is our opinion that a compatible decay
function can be engineered to produce comparable baseline results.
4.3 Future Development
As stated above, the baseline autonomous player architecture is
90 percent complete. After a suitable decay function is introduced,
work towards the completion of the baseline is expected to progress
quickly. Future work will include the design and implementation
of those functions and algorithms shown below in Figure 9.
Notice that the first two functions listed in Figure 9 are look-up and return functions. These functions will involve algorithms to traverse the architecture's data structures to find and return the requested propositions. As several such traversal algorithms currently exist in support of similar functions, much of the code to implement these look-up functions will be reused.
The activation function taken_away_by_protected_goals is similar to five existing functions for spreading activation through the network. Code from these existing functions can be reused to develop taken_away_by_protected_goals.
The definition of the active function appears on page 6
of the Maes paper. This definition indicates that there are three
criteria that must be met in order for a module to become active.
The first, executable, is implemented and its value stored within
the module object. The second criterion involves a comparison of
the activation level with theta. The activation level is implemented
and its value stored within the module object. As theta is also
currently available, the implementation of this function involves
only a few tests for logical equivalence.
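The first two criteria described above can be sketched as a simple predicate. The struct and function names are assumptions of this sketch; the remaining criterion, selecting the highest-activation candidate among competing modules, would be handled by the caller.

```cpp
// Hypothetical sketch of the per-module part of the active test.
struct ModuleStatus {
    bool executable;    // all preconditions observed true
    double activation;  // current activation level
};

// A module may become active only if it is executable and its
// activation level exceeds the global threshold theta.
bool meetsActiveCriteria(const ModuleStatus& m, double theta) {
    return m.executable && m.activation > theta;
}
```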
5.0 PROJECT SUMMARY
Work has progressed well. To date, over 90 percent of the system
architecture described in Maes [1,2] is implemented using OOD principles
and programming techniques. The development environment is currently
UNIX based and an executable copy of the prototype system in its
current configuration is available upon request. A listing of
the input and output is included in appendix A.
6.0 BIBLIOGRAPHY
[1] Maes, P. (1989). "The Dynamics of Action Selection." MIT A.I. Memo No. 1180.
[2] Maes, P. (1990). "Situated Agents Can Have Goals." Robotics and Autonomous Systems. Elsevier Science Publishers.
[3] Blumberg, B. (1995). "Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments." Submitted for publication, Siggraph '95.
[4] Blumberg, B. (1995). "Multi-Level Control for Animated Autonomous Agents."
APPENDIX A: NETWORK DESCRIPTION FILE AND OUTPUT LISTING
; ________________________________________________________
;
; Robot Competency Module Network Data File
;
; ________________________________________________________
; allocate a new CM Network named Robot1
NEW CMNET Robot1
; The network Goals
GoalMgr{
GList{ board-sanded self-painted }GL
SET GAMA 70
}GM
; The network States
StateMgr{
SList{ hand-is-empty sander-somewhere
sprayer-somewhere operational board-somewhere
}SL
SET PHI 20
}SM
; The network Competency Modules
ModuleMgr{
DefMod{
SET NAME PICK-UP-SPRAYER
CondList{ sprayer-somewhere hand-is-empty }CL
AddList{ sprayer-in-hand }AL
delList{ sprayer-somewhere hand-is-empty }DL
}DM
DefMod{
SET NAME PICK-UP-SANDER
CondList{ sander-somewhere hand-is-empty }CL
AddList{ sander-in-hand }AL
delList{ sander-somewhere hand-is-empty }DL
}DM
DefMod{
SET NAME PICK-UP-BOARD
CondList{ board-somewhere hand-is-empty }CL
AddList{ board-in-hand }AL
delList{ board-somewhere hand-is-empty }DL
}DM
DefMod{
SET NAME PUT-DOWN-SPRAYER
CondList{ sprayer-in-hand }CL
AddList{ sprayer-somewhere hand-is-empty }AL
delList{ sprayer-in-hand }DL
}DM
DefMod{
SET NAME PUT-DOWN-SANDER
CondList{ sander-in-hand }CL
AddList{ sander-somewhere hand-is-empty }AL
delList{ sander-in-hand }DL
}DM
DefMod{
SET NAME PUT-DOWN-BOARD
CondList{ board-in-hand }CL
AddList{ board-somewhere hand-is-empty }AL
delList{ board-in-hand }DL
}DM
DefMod{
SET NAME SAND-BOARD-IN-HAND
CondList{ operational board-in-hand sander-in-hand
}CL
AddList{ board-sanded }AL
delList{ }DL
}DM
DefMod{
SET NAME SAND-BOARD-IN-VICE
CondList{ operational board-in-vice sander-in-hand
}CL
AddList{ board-sanded }AL
delList{ }DL
}DM
DefMod{
SET NAME SPRAY-PAINT-SELF
CondList{ operational sprayer-in-hand }CL
AddList{ self-painted }AL
delList{ operational }DL
}DM
DefMod{
SET NAME PLACE-BOARD-IN-VICE
CondList{ board-in-hand }CL
AddList{ hand-is-empty board-in-vice }AL
delList{ board-in-hand }DL
}DM
SET SIGMA 50
}MM
;Instructs Competency Module Manager to create network links
LINKS
END
; ________________________________________________________
OUTPUT LISTING OF SIMULATION THROUGH 2 TIMESTEPS
FILE=netLinks.dat;
TIME 1
goals give SAND-BOARD-IN-HAND an extra activation
of 35
goals give SAND-BOARD-IN-VICE an extra activation
of 35
goals give SPRAY-PAINT-SELF an extra activation
of 70
state gives PICK-UP-SPRAYER an extra activation
of 3.33333
state gives PICK-UP-SANDER an extra activation
of 3.33333
state gives PICK-UP-BOARD an extra activation
of 3.33333
state gives PICK-UP-SANDER an extra activation
of 10
state gives PICK-UP-SPRAYER an extra activation
of 10
state gives SAND-BOARD-IN-HAND an extra activation
of 2.22222
state gives SAND-BOARD-IN-VICE an extra activation
of 2.22222
state gives SPRAY-PAINT-SELF an extra activation
of 3.33333
state gives PICK-UP-BOARD an extra activation
of 10
activation-level PICK-UP-SPRAYER : 13.3333
activation-level PICK-UP-SANDER : 13.3333
activation-level PICK-UP-BOARD : 13.3333
activation-level PUT-DOWN-SPRAYER : 0
activation-level PUT-DOWN-SANDER : 0
activation-level PUT-DOWN-BOARD : 0
activation-level SAND-BOARD-IN-HAND : 37.2222
activation-level SAND-BOARD-IN-VICE : 37.2222
activation-level SPRAY-PAINT-SELF : 73.3333
activation-level PLACE-BOARD-IN-VICE : 0
TIME 2
goals give SAND-BOARD-IN-HAND an extra activation
of 35
goals give SAND-BOARD-IN-VICE an extra activation
of 35
goals give SPRAY-PAINT-SELF an extra activation
of 70
state gives PICK-UP-SPRAYER an extra activation
of 3.33333
state gives PICK-UP-SANDER an extra activation
of 3.33333
state gives PICK-UP-BOARD an extra activation
of 3.33333
state gives PICK-UP-SANDER an extra activation
of 10
state gives PICK-UP-SPRAYER an extra activation
of 10
state gives SAND-BOARD-IN-HAND an extra activation
of 2.22222
state gives SAND-BOARD-IN-VICE an extra activation
of 2.22222
state gives SPRAY-PAINT-SELF an extra activation
of 3.33333
state gives PICK-UP-BOARD an extra activation
of 10
PICK-UP-SPRAYER spreads 1.90476 forward to
PUT-DOWN-SPRAYER for sprayer-in-hand
PICK-UP-SPRAYER spreads 0.952381 forward to
SPRAY-PAINT-SELF for sprayer-in-hand
PICK-UP-SANDER spreads 1.26984 forward to
PUT-DOWN-SANDER for sander-in-hand
PICK-UP-SANDER spreads 0.42328 forward to
SAND-BOARD-IN-HAND for sander-in-hand
PICK-UP-SANDER spreads 0.42328 forward to
SAND-BOARD-IN-VICE for sander-in-hand
PICK-UP-BOARD spreads 1.26984 forward to
PUT-DOWN-BOARD for board-in-hand
PICK-UP-BOARD spreads 0.42328 forward to
SAND-BOARD-IN-HAND for board-in-hand
PICK-UP-BOARD spreads 1.26984 forward to
PLACE-BOARD-IN-VICE for board-in-hand
SAND-BOARD-IN-HAND spreads 37.2222 backward
to PICK-UP-BOARD for board-in-hand
SAND-BOARD-IN-HAND spreads 37.2222 backward
to PICK-UP-SANDER for sander-in-hand
SAND-BOARD-IN-HAND decreases (inhibits) SPRAY-PAINT-SELF
with 26.5873 for operational
SAND-BOARD-IN-VICE spreads 18.6111 backward
to PLACE-BOARD-IN-VICE for board-in-vice
SAND-BOARD-IN-VICE spreads 37.2222 backward
to PICK-UP-SANDER for sander-in-hand
SAND-BOARD-IN-VICE decreases (inhibits) SPRAY-PAINT-SELF
with 26.5873 for operational
SPRAY-PAINT-SELF spreads 73.3333 backward to
PICK-UP-SPRAYER for sprayer-in-hand
activation-level PICK-UP-SPRAYER : 86.6667
activation-level PICK-UP-SANDER : 87.7778
activation-level PICK-UP-BOARD : 50.5556
activation-level PUT-DOWN-SPRAYER : 1.90476
activation-level PUT-DOWN-SANDER : 1.26984
activation-level PUT-DOWN-BOARD : 1.26984
activation-level SAND-BOARD-IN-HAND : 38.0688
activation-level SAND-BOARD-IN-VICE : 37.6455
activation-level SPRAY-PAINT-SELF : 127.46
activation-level PLACE-BOARD-IN-VICE : 19.881
========== EOF ===========
CLOSING.