On September 25, 2012, at the ISA Automation conference in Orlando, we witnessed a demonstration of the Oil & Gas Interoperability Pilot (OGI Pilot), the most significant information exchange project to date in the capital projects industry. Here we will describe what happened during the demonstration and give you a peek under the hood at the things that powered it.
This correspondent grew up in the engineering offices of operating plants and the offices of EPC contractors who designed the plants. In this environment it was easy to imagine that the highly detailed 3D models and specifications we created for a plant were the center of that plant’s universe. This demonstration put the engineering documents into their proper perspective and opened our eyes to what is really involved in getting the Operations and Maintenance (O&M) systems running.
The OGI Pilot demonstration showed that by applying industry standards that already exist we can take the important bits from a set of intelligent P&IDs and, with a little ingenuity, create a system that automatically feeds the O&M systems that actually run the plant. When it is in production, this system will become an actual ecosystem where the individual systems, coming from O&M system suppliers, EPC contractors, and others, will feed each other using vendor-neutral standards for information exchange. As a natural result, commissioning a plant will become much more reliable and much less time-consuming.

By demonstrating that direct, machine-to-machine information exchange is possible, the ecosystem established by the OGI Pilot creates a community where suppliers of Commercial Off the Shelf (COTS) systems can showcase how they interoperate with others. This open community spans the entire lifecycle of capital facilities and includes process plants, offshore platforms, and manufacturing facilities. And since the demonstration used published industry standards and COTS software, it is repeatable on real projects.

There are three main parts to this demonstration, shown in Figure 1, each a significant departure from what is usually done.
- Handover using iRING
- O&M System Exchange using Industry Standards
- Automatic Provisioning of O&M Systems
Handover using iRING
The demonstration showed a realistic scenario in which three organizations (AVEVA, Bentley Systems, and WorleyParsons) each took the role of an EPC contractor designing a debutanizer using its own engineering tools. WorleyParsons did the original design as if it were a real FEED project and gave the drawings and tag data to the other two. To ensure the realism of the scenario, AVEVA and Bentley imported WorleyParsons’s information and turned it into native P&IDs authored with their own tools.
For the turnover, instead of handing over data in proprietary form or printed to PDFs, each “EPC Contractor” exported its data in one of the standard forms of iRING. Each of these export methods is slightly different, but in every case the exported files contained both tag data and graphics information.
Intergraph’s SmartPlant P&ID and AVEVA’s P&ID both use the Proteus Schema, which is based on ISO 15926 Part 4 (Reference Data), while Bentley’s OpenPlant Power P&ID uses an export system based on ISO 15926 Part 8 (OWL) and Part 4 definitions. (There are some interesting differences between these approaches, but we will let their respective representatives venture onto that thin ice.) Chapter 3 of An Introduction to ISO 15926 will give you a heads-up on Part 4 and Part 8.
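To make the idea of a machine-readable P&ID export concrete, here is a small Python sketch of round-tripping tag data and graphics through an XML file. The element and attribute names below are invented for illustration only; the actual Proteus Schema and the Part 8 OWL exports are far richer.

```python
# Illustrative sketch of a machine-readable P&ID export/import cycle.
# Element and attribute names are invented, NOT the real Proteus Schema.
import xml.etree.ElementTree as ET

def export_pid(items):
    """Serialize tagged items (tag, class, x, y) to an XML string."""
    root = ET.Element("PlantModel")
    for item in items:
        el = ET.SubElement(root, "PlantItem",
                           TagName=item["tag"],
                           ComponentClass=item["class"])
        ET.SubElement(el, "Position", X=str(item["x"]), Y=str(item["y"]))
    return ET.tostring(root, encoding="unicode")

def import_pid(xml_text):
    """Recover both tag data and graphics (positions) from the export."""
    root = ET.fromstring(xml_text)
    return [{"tag": el.get("TagName"),
             "class": el.get("ComponentClass"),
             "x": float(el.find("Position").get("X")),
             "y": float(el.find("Position").get("Y"))}
            for el in root.findall("PlantItem")]

xml_text = export_pid([{"tag": "PT-101", "class": "PressureTransmitter",
                        "x": 120.0, "y": 45.5}])
items = import_pid(xml_text)
```

The point of the sketch is simply that nothing is lost in transit: a receiving system gets the same structured tags and geometry the sender had, with no human retyping in between.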
O&M System Exchange Using Industry Standards
Just as iRING is well-suited to exchange information between EPC contractors and suppliers, there are other information standards well-suited to exchange information between O&M systems. These standards were used to develop the four components of this demonstration.
The first is a transformation engine built by the University of South Australia (UniSA). It converts the iRING data received from the EPC contractors into MIMOSA’s Common Conceptual Object Model (CCOM). The CCOM data, which includes both the tag data and the graphics, is stored in the second component.
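As a rough illustration of what such a transformation engine does, the sketch below re-expresses records from one vocabulary in another. The class names, CCOM entity types, and mapping table here are all hypothetical; UniSA’s actual engine works with the full iRING and CCOM models.

```python
# Hypothetical sketch of a vocabulary-to-vocabulary transform.
# The class names and mapping table are invented for illustration.
IRING_TO_CCOM = {
    "PressureTransmitter": "Asset",
    "ControlValve": "Asset",
    "PipingNetworkSegment": "Segment",
}

def to_ccom(iring_record):
    """Translate one iRING-style record into a CCOM-style record,
    keeping a reference to where the data came from."""
    ccom_type = IRING_TO_CCOM.get(iring_record["class"])
    if ccom_type is None:
        raise ValueError(f"no CCOM mapping for {iring_record['class']}")
    return {"Type": ccom_type,
            "Name": iring_record["tag"],
            "Source": iring_record.get("source", "unknown")}

record = to_ccom({"class": "PressureTransmitter", "tag": "PT-101",
                  "source": "WorleyParsons P&ID 001"})
```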
This second component was created by Assetricity and given the rather long name of the Integrated Operations and Maintenance for Oil and Gas Register (or IOM-OG Register for short). The register is an important part of the ecosystem, which we will describe in more detail in a future article. Very simply, it provides a sort of flow-control center for the information moving through the OGI ecosystem, including the information that will eventually propagate to the O&M systems.
The third component, the OpenO&M Information Service Bus Model (ISBM) is the glue that joins all of the individual systems together into an ecosystem. The service bus model is a way for a system to present its data to other systems and to receive information from them in return. It is a specification created by the OpenO&M Initiative and it allows vendor-neutral transport within the ecosystem eliminating the need for point-to-point integration. IBM provided the implementation of the ISBM for this phase of the OGI Pilot.
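The publish/subscribe pattern at the heart of a service bus can be sketched in a few lines. The in-memory ServiceBus class and channel name below are purely illustrative; the real ISBM is a web-service specification, not a Python API.

```python
# Minimal in-memory sketch of the publish/subscribe pattern that a
# service bus standardizes. All names here are illustrative only.
from collections import defaultdict

class ServiceBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, handler):
        """Register a callback for messages posted to a channel."""
        self._subscribers[channel].append(handler)

    def publish(self, channel, message):
        """Deliver a message to every subscriber of the channel."""
        for handler in self._subscribers[channel]:
            handler(message)

bus = ServiceBus()
received = []
bus.subscribe("/OGI/ProvisioningData", received.append)
bus.publish("/OGI/ProvisioningData", {"tag": "PT-101", "units": "kPa"})
```

Note that publisher and subscriber know only the channel, not each other. Either side can be swapped out without the other noticing, which is exactly how a bus eliminates point-to-point integration.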
The fourth component is the Unified Architecture (UA) standard published by the OPC Foundation, which standardizes access to the data generated by automation and control systems.
Automated O&M Systems Provisioning
In this demonstration, a process historian (OSIsoft’s PI) and IBM’s Information Integration Core (IIC) stand in for all O&M systems.
A historian is a system that meticulously records the history of all of the readings of all of the instruments in a plant. But to get started, the historian needs to be provisioned with the identity of every instrument. Typically this tag data is keyed in manually from project documents, but in this demonstration the data loading for the historian was coordinated with the register and done automatically. In addition to the obvious gains in speed and reliability, this means that the relationships between the tags and their respective P&ID drawings were not lost and can be used elsewhere.
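The difference automatic provisioning makes can be sketched as follows. The Historian class and its field names are invented for illustration; the point is that each programmatically loaded tag keeps its link back to the source P&ID.

```python
# Sketch of automatic historian provisioning: tag records arriving from
# the register are loaded programmatically instead of keyed in by hand.
# The Historian class and field names are invented for illustration.
class Historian:
    def __init__(self):
        self.points = {}

    def provision(self, tag_records):
        """Create one historian point per instrument tag, preserving
        the reference to the P&ID drawing the tag came from."""
        for rec in tag_records:
            self.points[rec["tag"]] = {
                "units": rec["units"],
                "source_drawing": rec["drawing"],  # context is not lost
            }

historian = Historian()
historian.provision([
    {"tag": "PT-101", "units": "kPa", "drawing": "P&ID-001"},
    {"tag": "TT-102", "units": "degC", "drawing": "P&ID-001"},
])
```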
IIC is a high-performance analytics engine which can also talk to the ISBM for supplier-neutral information exchanges within the OGI Ecosystem. In the live demonstration, after it was provisioned, IIC generated a Human Machine Interface (HMI) in the form of a view of one of the P&ID drawings. IIC uses this HMI to show people what is going on with the O&M systems when they are running. In this demo it was used to show simulated process activities in the debutanizer tower.
We must admit that this “HMI stuff” pretty much went over our heads the first time we saw it. But to the Owner/Operators present, this was a truly significant moment. The P&IDs shown at the end of the demonstration were not simply copies of those used for input; they were generated from internally stored information. This showed that both the IIC and the data historian had been automatically provisioned with the information needed to exchange and visualize O&M activity after startup.
Usual Handover Practice
If you grew up watching Hollywood movies about computers you might be thinking “OK, so we’ve got computers talking to computers. What’s the big deal?” To really understand the significance of this demonstration we need to take a small detour and look at the way handover is usually done.
From the point of view of an EPC contractor, most projects are complete when the plant is built, handed over, and the documents that were used to design and build it (the “deliverables”) are delivered. And while there are a few leading edge Owner/Operators who are starting to ask for a data-centric handover, the state of the art for most facilities is PDF files.
Now there is nothing inherently wrong with PDF files—in fact they are a definite improvement over the way it used to be. When this correspondent started his career, document handover was a couple of semi-trailers full of paper. Every decade or so the volume of the physical media shrank by an order of magnitude until nowadays document handover on DVDs could fit into a couple of largish suitcases.
Compared to the old way, having a project’s contractors and suppliers send in PDF files is a breeze.
But a major bottleneck with PDF files is that they are formatted for human readers, not machines. A person still has to open each file, look through it to select certain data values, and then copy the data somewhere else. Typically, information is prepared individually for each system, which has the side effect of stripping the context (that is, the origin) from the data. These activities are labor intensive, prone to human error, and, because they usually occur late in the construction phase, severely constrained by time.
In an effort to alleviate a large data entry crunch, some Owner/Operators (the leading-edge folks mentioned above) ask suppliers to load the most important data values into spreadsheets of a prescribed form so that they can be loaded directly into the O&M systems. But all that does is move the data entry to a so-called high-value engineering center. Someone still has to decide which of the O&M systems to load it into. And as Figure 3 shows, these are not all simple one-to-one relationships.
Provisioning O&M Systems
The most important part of commissioning a facility is the setup and management of its computerized control systems. This is the field of Computer Integrated Manufacturing (CIM) and we are not going to plumb its depths here. But briefly, there are generally several layers of control between, say, an individual instrument in the field and the Enterprise Resource Planning (ERP) system which orchestrates the entire facility. Each layer gathers information from lower levels, processes it in some way, and then passes it up the hierarchy. (If you are curious, Google for Purdue Reference Model for Computer Integrated Manufacturing.)
The procurement decision for most of these layers is generally a function of which manufacturer supplies the physical assets. But there is always a Malcolm in the Middle which has to be custom built. It aggregates information from the lower-level systems for consumption by the ERP system. It is generally a system of systems (SoS), which essentially means that system integrators construct point-to-point exchanges between the appropriate O&M systems. But because the design of the SoS depends on the procurement decisions for systems in other layers, it often cannot be built until the facility is well into construction, with the obvious schedule constraints. A common result is that the data entry for the SoS is “good enough” to get the facility running but is missing chunks and has lost its context, the history of where the data came from.
The OGI Pilot Does It Better
The OGI Pilot turns the standard practice around by making the provisioning of the O&M systems for the newly constructed facility as important as the facility itself. Once this change in basic assumption is made, two things become obvious.
- Since the “digital asset” is as important as the “physical asset”, the information required for engineering and O&M activities after the facility is commissioned becomes a driver for defining handover content. The natural approach, then, is to work backwards from the O&M phase, repeatedly asking the question “What information is required from earlier phases to properly manage this phase?” What emerges from this analysis is a detailed set of information handover requirements for all project participants, from engineers to suppliers to constructors.
- The second is that the “digital asset” can be built more efficiently if the information handover is in a form that is fully machine-readable, using supplier-neutral, published standards. In this day and age all of the information required by O&M systems already exists in the databases of the project participants. When you think about it, it is silly to “dumb it down” by printing it to PDF files only to require other people to select certain values and manually key them in again.
With these two ideas in place, automated provisioning of the O&M systems becomes a practical goal. The SoS which will eventually operate and maintain the facility can be designed before final procurement decisions are made. The current ad hoc practices can then be replaced with improved methods which decrease cost and risk while improving schedule. This sets the stage for life-cycle interoperability continuing into and through the O&M phase of the physical and digital asset.
This demonstration, the first for the OGI Pilot, highlights two important achievements that make the automation of handover and provisioning practical in the real world.
Industry Standards for Handover and Provisioning
This demonstration used only tools that are either commercially available today or that were built from published industry standards. The functions that exported the iRING exchange files from the P&ID systems are commercially available right now. Three software systems were written for this project, but because they are based on published standards they can be implemented by anyone. The Transform Engine provided by UniSA uses iRING and MIMOSA’s CCOM. The register provided by Assetricity is based on CCOM. The ISBM implementation provided by IBM is based on the OpenO&M ISBM specification and transported both iRING and CCOM data.
The use of industry standards for information exchange will make handover and provisioning much less chaotic and more reliable. Things that currently have to be custom-built, such as the System of Systems control layer that integrates the commercial layers, can now be created in advance rather than having to wait until all of the procurement decisions have been made.
Automatic O&M Provisioning
There are obvious schedule, quality, and cost benefits to automatic provisioning over manual provisioning. Because the information that is handed over is machine-readable, it can be managed in a way that preserves the origin of the data. In the future it will be possible to provision O&M systems by “pushing” the required data into the appropriate O&M system on command, or by a “pull”-oriented self-provisioning process whereby the O&M systems request the provisioning information as soon as they are connected.
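The two provisioning styles might look like this in miniature. All class and method names below are hypothetical sketches, not any vendor’s API.

```python
# Sketch contrasting "push" and "pull" provisioning against a register.
# All names are invented for illustration.
class Register:
    def __init__(self, records):
        self._records = records

    def push_to(self, system):
        """Push: the register initiates, loading a system on command."""
        system.load(self._records)

    def fetch(self):
        """Pull: hand the records to a system that asked for them."""
        return self._records

class OandMSystem:
    def __init__(self):
        self.data = []

    def load(self, records):
        self.data = list(records)

    def self_provision(self, register):
        """Pull: the system requests its data as soon as it connects."""
        self.load(register.fetch())

register = Register([{"tag": "PT-101"}])
pushed = OandMSystem()
register.push_to(pushed)        # push on command
pulled = OandMSystem()
pulled.self_provision(register)  # pull on connect
```

Either way the outcome is the same fully provisioned system; the choice is simply which side initiates the exchange.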
Next Steps for the OGI Pilot
The overarching objective of the OGI Pilot is a sustainable full life-cycle ecosystem for interoperability. This ecosystem will include not only standardized information, but a standardized systems architecture. The use of standards will enable sustainable System of Systems interoperability for all of the systems that make up the facility, whether it be a processing plant, an offshore platform, or any other complex facility.
Work is going on in a number of areas.
- Continued development of the permanent OGI test bed. The OGI Pilot work is being done at the level of detail and scale of a real industry project. The data sets used in this demonstration are intended to be a permanent interoperability test bed available to the industry. To this end, the P&IDs used in this first demonstration are being extended from the FEED stage of a project to something typical of the detailed engineering phase.
- Identifying and filling gaps. The OGI Pilot is working alongside several groups from PCA and Fiatech to plug some gaps that became apparent in the demonstration. For instance, a major effort is underway to identify extensions to the iRING Reference Data Library that have been made by a number of other projects and add them to the master catalogue being developed by the JORD project.
- More participants. Some new EPC contractors have stepped forward to participate. This will ensure continued realism and new eyes to validate that the processes can be scaled into the real world.
- Adding Use Cases. This demonstration dealt with two Use Cases, Continuous Digital Handover (Greenfield Handover), and Automated Provisioning of O&M Systems. The next phase of the OGI Pilot will add O&M Use Cases in cooperation with O&M solutions providers.
For more information about the OGI Pilot, and about MIMOSA itself, follow this link. (Free registration required.)
The Demonstration Video
A video of the demonstration is hosted on the MIMOSA website.