Fig 1: Components of Process Integration PI 7.1
The various components of Process Integration PI 7.1 are:
SAP Solution Manager is used to manage the entire SAP solution landscape, which can be a challenging task. It helps companies minimize risk, increase the reliability of their IT solutions, and reduce TCO throughout the solution life cycle.
The Enterprise Services Repository (ESR) is a central repository of information that contains all the services. As a container, it stores the underlying metadata of application objects such as service interfaces and their descriptions. The global data types, interfaces, and business processes maintained in the ESR can be reused wherever needed.
The Service Registry is a common pool available in the SOA platform where the services of an enterprise are shared. Providers publish services in the registry, and consumers discover the services they need to consume. The Service Registry is the UDDI part of the Enterprise Services Repository (ESR), which enables service consumers to find services.
The Integration Directory is the central tool for configuring the processing of messages, such as the systems and external communication partners that are involved in the process, the routing rules that govern the message flow between these entities, as well as the settings for communication incl. security.
The Integration Server is the runtime environment to provide secure, standards-based, reliable, and scalable communication between provider and consumer applications
The Advanced Adapter Engine (AAE) provides built-in mediation capabilities to reconcile incompatible protocols, structural maps, schemas, and data formats between provider and consumer applications, which eliminates the need for the ABAP stack during processing.
SAP NWA safeguards the deployment and operation of processes in order to ensure runtime governance, security with access control, authentication, auditing, enforcement of compliance to policies, and monitoring of service execution.
It handles End to End Monitoring, Performance Monitoring, Message Monitoring, Component Monitoring, Alert Monitoring, Adapter Monitoring, Cache Monitoring, Sequence monitoring and Logging and tracing.
The System Landscape Directory of SAP NetWeaver (SLD) serves as a central information repository for your system landscape. A system landscape consists of a number of hardware and software components that depend on each other with regard to installation, software updates, and demands on interfaces.
Before I continue, let me tell you what I felt whenever PI 7.3 was mentioned by the various speakers, including Sindhu Gangadharan and Udo Platzer from the product development team focused on SAP PI. There was a clear and evident emphasis on the fact that SAP continues to invest in SAP PI as a strategic product in the NetWeaver suite and will keep upgrading PI in terms of SPs, EHPs, or even a new release itself.
So does that sound like music to the ears, or do we need more convincing? Well, that will be decided as the roadmap evolves. But there are signs, good signs I mean, and as of now it has been a positive verdict on PI from TechEd 2010.
So what is really new in PI? What is the buzz around PI 7.3?
In a nutshell, the three main pitches from SAP are the following:
1. Centralized Monitoring
2. Single Stack ESB
3. Reduced TCO
With 7.3 (and 7.11 SP06 partially), along with Solution Manager 7.1, monitoring in PI will take a new shape. SAP has worked hard to deliver a cool new "good morning page", a single-screen overview of your PI system that lets system administrators check the health of the system and interface flows. There is a cool new ping functionality for communication channels (e.g. you can now ping a File adapter and it will give you a detailed report of the ping confirming the accessibility of the directory, filename, etc.).
There is a focus on incident management, wherein context-sensitive operations help you navigate to the issue, troubleshoot it, manage it, and even escalate it via an incident ticket in Solution Manager, and even raise a notification via email or SMS to users.
You can even add multiple PI domains, which means that from a single page you can monitor and analyze root causes for any number of PI systems.
Integration with external tools like Tivoli will now become much easier.
My verdict: Awesome!!! This has been missing for years, and it is definitely a much-awaited feature many customers will want to utilize.
Single Stack ESB
This is something new to the PI world: a single-stack, Java-only deployment option for PI. It utilizes the AAE (SAP calls it the Advanced Adapter Engine Extended [AEX]). This is the initial step from SAP towards a single-stack-based PI, which means that we will see the ABAP stack disappear in the coming years.
The single-stack deployment allows customers to benefit from a lower energy footprint, reduced hardware, and minimal downtime during restarts.
This also means that the adapters on the ABAP stack have now moved on to the Java stack (yes…. a Java based IDoc and HTTP adapter).
Eclipse now gets introduced to the ESR. So you will find NWDS-based editors for creating and editing service interfaces and data types. For more details, refer to this blog.
There are significant improvements to performance and high-volume data transfer, a context-sensitive view in the ESR, support for multi-mapping in the AAE, pub/sub for JMS adapters, etc.
My verdict: There are some exciting changes that have addressed most of the pain points. The ESR-Eclipse integration is in its adolescent stage; unless we see capabilities such as maintaining Java mappings being featured, I wouldn't be that keen on it.
There have been major enhancements and efforts by SAP to reduce TCO for customers. Fault handling has been improved drastically. The buzzwords here are stability, performance, and SLA.
There are features like message blacklisting that enable automatic control of problematic messages that can cause system downtime, along with advanced garbage collection and improved JVM instability detection.
There are now CTC templates provided by SAP which should simplify system installation and configuration. OOM handling and safe restart are other features.
SAP has also invested in queue handling (EOIO messages) and in optimizing the cache refresh.
A new protocol, XI 3.1, has been introduced to increase performance.
An error-queue feature stands out in the latest enhancements: it ensures that a message (EO) that runs into an error in a queue does not affect the other messages in the same queue. The error message is automatically moved to an error queue so that the other messages can continue to be processed.
There is also queue balancing that will ensure that messages will be distributed automatically to various queues which are free in terms of resources.
SAP looks to guarantee a Near Zero Down Time with the AEX.
My verdict: The reduced TCO was something that brought a smile to my face. This, I believe, is one of the best features from an SAP PI 7.3 perspective, and it will make PI a much more stable product than it is today in a productive environment.
Well, there are many more features in 7.3 apart from the above, but what I have mentioned here is what stood out and gained my attention. (Not to forget the 'User-Defined Message Payload Search' feature, which is now productive from 7.3.)
So what do you think about PI 7.3? Is it a story worth buying into?
Q: Which adapter should you use while integrating with any SAP system? Explain why.
A: SAP gives us the following options to communicate with SAP systems:
1. IDoc Adapter
2. RFC Adapter
3. Proxies (ABAP proxies)
Explanation: If you take a close look at the options here, the one thing that strikes you right away is the usage of proxies. We know that proxy generation is possible only if your WAS is >= 6.20, so that is one parameter that comes up straight away for the usage of proxies.
Hence, use proxies only if the WAS version is >= 6.20. The biggest advantage of a proxy is that it always bypasses the Adapter Engine and interacts directly with the application system and the Integration Engine, so it should give us better performance.
Next I would go for the IDoc adapter and last the RFC adapter. What do you say, guys?
Q: What is Program ID and where do you use it?
A: Program ID??? I had no idea at first, but after 10 seconds I got it: we use it while creating an RFC destination. I was not aware of what it does, though; that was all I could answer. So I searched SDN and found the following:
Well, the program ID can be anything, even your name. But the catch here is that it must be the same in both the RFC destination and your RFC adapter.
Q: Where do you find Application Server(Gateway) and Application Server Service(Gateway)?
A: TCODE : SMGW -> Goto-> Parameters -> Display
You will find the required info under Attributes. Application Server(Gateway) is the Gateway hostname and Application Server Service(Gateway) is the Gateway service.
Usually the Gateway service is sapgwXX where XX is the system number.
Q: What is the main difference between the RFC and IDoc adapters, and when do we go for RFC vs. IDoc, with an example?
A: IDoc and BAPI are both SAP objects. The IDoc adapter is used for asynchronous communication; the RFC adapter can be used for both synchronous and asynchronous communication.
Q: Sender Agreement is required for IDoc adapter? Why?
A: No, a sender agreement is not required for the IDoc adapter. Instead, we do the settings from the R/3 to the XI system, so there is no option to create an IDoc adapter on the sender side; we trigger the IDocs through WE19 and transfer the data.
Q: What is Global container in SAP XI?
A: A container object can only be used in the function it is defined in. It enables you to cache values that you want to read again the next time you call the same user-defined function.
A global container can be used, and remains visible, across different functions. In the old days it was used to store objects in mappings; now we can use global variables instead.
From SP14 and above, avoid the global container. Use the Java section of the message mapping to define global variables and use them in your UDFs.
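The difference between per-call and cross-call state can be illustrated outside the SAP mapping runtime with a plain-Java sketch. The class and method names below are made up for illustration only; inside PI you would declare the field in the Java section of the message mapping:

```java
public class MappingState {
    // Plays the role of a "global variable": shared across all UDF
    // calls within one mapping execution
    private int counter = 0;

    // Each call sees and updates the shared state, while the local
    // variable is reset on every call (like a per-call container value)
    public String nextId(String prefix) {
        counter++;          // survives between calls
        int local = 0;      // reset on every call
        local++;
        return prefix + counter + "-" + local;
    }

    public static void main(String[] args) {
        MappingState m = new MappingState();
        System.out.println(m.nextId("CUST")); // CUST1-1
        System.out.println(m.nextId("CUST")); // CUST2-1
    }
}
```

The shared field keeps counting across calls, while the local is always 1: exactly the distinction between a global variable and function-local state.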
Q: What is Context Handling and where do we use it?
A: Context handling is used when you want to group elements from different nodes of the source into a single node of the target with multiple occurrences of the element (removeContexts). If you want to do the reverse, use the splitByValue function.
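The behavior of these two standard functions can be simulated in plain Java, with lists of lists standing in for PI's value queues and contexts (the class and method names are illustrative only):

```java
import java.util.*;

public class ContextDemo {
    // removeContexts: flatten the values of several contexts into one context
    static List<String> removeContexts(List<List<String>> contexts) {
        List<String> flat = new ArrayList<>();
        for (List<String> ctx : contexts) flat.addAll(ctx);
        return flat;
    }

    // splitByValue: the reverse, each value goes into its own context
    static List<List<String>> splitByValue(List<String> values) {
        List<List<String>> out = new ArrayList<>();
        for (String v : values) out.add(Collections.singletonList(v));
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> source = Arrays.asList(
                Arrays.asList("A", "B"), Arrays.asList("C"));
        System.out.println(removeContexts(source));                // [A, B, C]
        System.out.println(splitByValue(Arrays.asList("A", "B"))); // [[A], [B]]
    }
}
```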
Q: In which situation you used file to file scenario?
A: We use FTP when the file involved is not on the XI server but on another remote system. The scenario where we use file-to-file is when you have FTP adapters on both systems (sender/receiver). These systems could also be non-SAP systems, or the partner systems may be small in size.
Q: In an IDoc-to-file scenario, how do we get the file name as the IDoc number at runtime for every IDoc?
A: An IDoc number is generated for every IDoc; it is a 16-digit unique number, and you can check the status of the IDoc in transaction code IDX5.
Q: If we are using business services in a file-to-IDoc scenario, where can we specify the logical system names, and what is the importance of logical systems?
A:We specify the logical system name in “WE20”
Q: Is it possible to transfer data without using the IR (Integration Repository)?
A:Yes, it is possible.
Configure sender and receiver communication channels in a Business Service or System, as usual.
Create a Receiver Determination:
1. The Service has to be a valid business service or system in the ID.
2. Interface name can be anything you make up, but should be unique. In this case, it is “nonexistence_interface”.
3. Namespace name can be anything you make up or already exists. In this case, it is “http://abc.com”.
Enter a valid service for the Receiver and save in Receiver Determination.
Create Interface Determination and do following:
1. Use the same Interface name as the sender.
2. Use the same Namespace name as the sender.
3. Do NOT enter any Interface Mapping.
Create Sender and Receiver Agreements as usual.
The interface is now ready to be activated and executed. Once executed, you can examine the content of the payload in SXMB_MONI. It will contain whatever data you sent, but you will also receive an error indicating that the message is not XML (which can be ignored).
The main points of this exercise are:
1. IR is not necessary for development of interfaces in XI.
2. In ID, any name can be used for Sender Interface and Namespace names, and they do not need to exist in IR.
3. No Mapping can be used, since the data may not be XML.
4. The Receiver Interface and Namespace names must match that of the Sender Interface and Namespace names.
5. Most importantly: the data sent through XI does NOT have to be in XML; any data can be sent through XI.
Q: what is the protocol used for File?
A: It depends on where the file is located. If the file is on the same local network as XI, then NFS (Network File System) is best suited; otherwise, FTP.
Use the NFS protocol when you are required to poll files from local machines.
Use the FTP protocol when you are required to poll files from an FTP server that is on the remote side or outside of the firewall.
Q: What is the use of IDX2?
A: Maintain the Idoc Metadata. This is needed only by XI, and not by other SAP systems. IDX2 is needed because XI needs to construct IDoc-XML from the IDoc. No other SAP system needs to do that.
How to use a Receiver Rule in PI 7.1
Suppose the scenario is file to IDoc. PI receives all customer details through a file and sends them as IDocs to different systems. All customers with country US should go to system A, all customers with country India should go to system B, and so on. So here we need to determine the receiver business system based on the value of the field Country.
How can we use a receiver rule in the above scenario?
Receiver Rule step by step
Create Data type “CustomerDetails” for File sender
Create Message type ”MTCustomerDetails”
Create Context Object: “Country”
To create a context object, go to Create Object -> Interface Objects -> select Context Object and click OK. Create the object, providing the name "Country" and the reference type "string".
Now create outbound service interface as following
Click on Context object in service interface to assign context object to Field from service interface.
Assign the "Country" context object to the field "Country" as shown below; you can select the context object using the search help here.
Now create your Mapping and other objects..
ID side configuration:
Inside the configuration scenario, create the receiver rule object as follows.
Now provide conditions to it, in the same way as an XPath condition, as follows.
Then select the receiver system for each condition.
In our case, if Country = India then we need to send the message to SystemB, as follows.
Here you can add as many conditions as you want and assign the respective business systems.
Now, after this, create the receiver determination as follows.
Then configure your remaining objects: interface determination, sender agreement, and receiver agreement.
The advantages of a receiver rule over normal condition-based routing are as follows.
Executing Multiple Mapping Programs
We know that an interface mapping can execute multiple mapping programs consecutively. This blog talks about the execution of multiple mapping programs in an interface mapping.
When you run into complexity in a mapping, this feature lets you divide the mapping into multiple mapping programs. It could be helpful for beginners.
AMR (Automated Meter Reading) to SAP-ISU (Industry Solution utilities).
AMR generates fixed-length files on an FTP server; from there, PI should pick up the files and convert them into BAPI calls.
The AMR fixed-length text file should be converted into BAPI XML.
1. This can be done with a Java mapping, without FCC.
2. It can be done with FCC (a single message mapping).
3. Multiple mapping programs (a Java mapping and a message mapping, without FCC).
Multiple mappings in one interface mapping:
Two mapping programs are used, as mentioned below; they are executed in order from top to bottom.
1.Java mapping which takes fixed length source files and generates source xml file (MT_AMR_Source).
2.Message mapping, it takes the source xml file (MT_AMR_Source) generated by Java mapping, as input, and generates BAPI xml file.
1. Java Mapping:

/* @author PNemalikanti */
public class AMR implements StreamTransformation {

    private Map param;

    public void setParameter(Map param) {
        this.param = param;
    }

    public void execute(InputStream in, OutputStream out) {
        try {
            out.write("<?xml version='1.0' encoding='UTF-8'?>".getBytes());
            BufferedReader bin = new BufferedReader(new InputStreamReader(in));
            String inLine;
            while ((inLine = bin.readLine()) != null) {
                // build the MT_AMR_Source XML from each fixed-length line
            }
        } catch (Throwable e) {
            // report the mapping error
        }
    }
}
Note: Java mapping gives input source xml file to message mapping.
2. Message Mapping:(Generates BAPI XML )
Java Mapping and Message Mapping are registered here.
Source Fixed Length File:
With the help of executing multiple mapping programs (a Java mapping and a message mapping), the AMR fixed-length files are converted into a BAPI_ISUPROFILE_IMPORT XML file.
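As a rough illustration of what such a Java mapping does with each record, here is a self-contained sketch that converts one fixed-length line into an XML fragment. The field names and widths are invented for the example; a real AMR layout would differ:

```java
public class FixedLengthToXml {
    // Hypothetical layout: meterId (10 chars), date (8 chars), value (8 chars)
    static String toXml(String line) {
        String meterId = line.substring(0, 10).trim();
        String date    = line.substring(10, 18).trim();
        String value   = line.substring(18, 26).trim();
        return "<Record><MeterId>" + meterId + "</MeterId>"
             + "<Date>" + date + "</Date>"
             + "<Value>" + value + "</Value></Record>";
    }

    public static void main(String[] args) {
        // 10 + 8 + 8 = 26 characters per record
        System.out.println(toXml("MTR0000001" + "20100101" + "00012.50"));
    }
}
```

In the real mapping, the loop over `readLine()` would emit one such fragment per input line under a single MT_AMR_Source root element.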
Recently I was performing an upgrade of the Seeburger set of adapters on SAP PI 7.0 from version 1.7 to 1.8.1 (the latest version recommended for PI 7.0). During this upgrade, we faced some issues which made me realize that a basic flaw during installation of the Seeburger suite on PI could lead to a security breach and could provide an opportunity for mischief lovers (a mild word) or swindlers (a harsh word).
You might have recognized this earlier, but in the couple of PI systems I observed, the security team had missed it. This prompted me to share this small but "could be relevant" issue.
One of the steps of the Seeburger installation is to create a user "seeburger" and assign the role "SAP_J2EE_ADMIN" to this user. It is then advised to set the password of this user to "xxxxxxx" (I am not mentioning the password here as it could provoke some users to exploit it; it is available in the installation manual). Wherever I happened to check PI systems using Seeburger adapters, I knew there was a user "seeburger" with password "xxxxxxx" with quite good access to PI system information and configuration. I tried logging in and succeeded, as this is a dialog user. In most cases, a Basis consultant performing the installation doesn't really dare to manipulate such passwords, to avoid a security breach. This means that any developer who has been part of a Seeburger installation anywhere across the globe is able to access their client's PI systems with the role SAP_J2EE_ADMIN. Access to this role, I believe, is not a recommended practice, especially for large PI installations involving a large number of PI developers.
The simple solution is to change the password as per your conventions and the Security Administrator could maintain such passwords separately. The location where this password is used is
Visual Admin -> Server -> Services -> Connector Container -> Connectors -> Connector 1.0 -> seeburger.com/com.seeburger.xi.<Module> -> Managed Connection Factory -> Properties
Change the password of key “adapterUserPassword” to the new password and Save.
I hope the Security Administrators read it before the developers!
This blog describes the load balancing mechanism of XI3.0/PI7.0 in a High Availability Environment. It provides a holistic overview of runtime load distribution for XI/PI ABAP application servers, Java server nodes, adapter engine and SAP Web Dispatcher. Not only RFC load balancing but also message flow and mapping request are discussed.
You are using XI 3.0 or PI 7.0. You have activated different load balancing methods for different resources in the dual-stack system. But you want to check whether these methods take effect during runtime and whether the implemented load distribution strategy is correct. Probably you also want to have on-demand load balancing by tuning some parameters during runtime.
Each application server has a capacity value for each of the services it provides (ABAP, Java). The capacity is used as an estimated value of the actual “power” of an application server. As it is safe to assume that more dialog processes and server nodes are configured on more powerful machines, this number can be seen as an approximate benchmark. The SAP Web dispatcher needs information about the capacity of a server in order to balance its workload.
The capacity of a list of all application servers can be retrieved from URL:
http://<Central instance host>:<ABAP Message Server HTTP Port>/msgserver/text/logon
(The message server port is defined by parameter ‘ms/server_port_<xx>’ and can be found via SMMS–>Goto–>Parameter.)
Here the 'LB=xx' following 'DIAG' indicates the capacity value of ABAP, and the 'LB=xx' following 'J2EE' indicates the capacity value of Java. In fact, the numeric value after 'LB' on the 'DIAG' line equals the number of dialog work processes on each application server, and the numeric value after 'LB' on the 'J2EE' line equals the number of server nodes. The Web dispatcher takes the maximum of both values as the standard capacity setting for this application server.
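As a small illustration of how the capacity could be extracted from such a list, here is a Java sketch. The line shown in the comment is only an approximation of the real message server output; the only assumption the code relies on is that capacity values appear as 'LB=<number>' tokens:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CapacityParser {
    // Assumed line shape (illustrative, not the exact SAP format):
    // "host_XID_00 host 3200 DIAG LB=8 J2EE LB=3"
    static int capacity(String line) {
        Matcher m = Pattern.compile("LB=(\\d+)").matcher(line);
        int max = 0;
        while (m.find()) {
            // keep the maximum, as the Web dispatcher takes the
            // larger of the DIAG and J2EE values
            max = Math.max(max, Integer.parseInt(m.group(1)));
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(capacity("host_XID_00 host 3200 DIAG LB=8 J2EE LB=3")); // 8
    }
}
```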
Although these values exactly reflect the pre-configured system resources (dialog work processes or server nodes), they can be changed arbitrarily on demand while the system is running (consider a customer who doesn't want the central instance to get too much load). This will be discussed in the 'Web Dispatcher' part.
In order to perform load balancing, the SAP Web Dispatcher periodically fetches a list of all active application servers of an SAP system. This list includes the host names as well as a static value indicating the capacity of each server (see ‘System Overall Capacity’ part). So there are majorly two factors determining the load balancing by SAP Web Dispatcher: load balancing strategy and server capacity.
You can configure the details of the load balancing strategy in the profile parameter wdisp/load_balancing_strategy. Two alternative strategies can be defined:
a. Simple Weighted Round Robin
Parameter value: simple_weighted_round_robin
Each server with capacity k receives precisely k requests in succession, before the next server takes over. This process is simple and deterministic since it contains no dynamic elements.
(This process, especially with end-to-end SSL could lead to unexpected results, if many individual servers have to process too many successive requests. For more details refer to SAP link: http://help.sap.com/saphelp_nw70/helpdata/en/5f/7a343cd46acc68e10000000a114084/frameset.htm)
b. Dynamic Weighted Round Robin
Parameter value: weighted_round_robin (default)
The load is balanced using a load factor. The server with the lowest load factor receives the next request. If a server is assigned a request, the load factor is increased in proportion to the reciprocal value of the server capacity. The load factor is apparent from the Web Administration Interface. As the load factor is constantly changing, the information about the “next preferred server” is simply a snap-shot situation.
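The two strategies can be sketched in plain Java (the names are illustrative; the real Web Dispatcher implementation is of course more involved):

```java
import java.util.*;

public class Dispatch {
    // Simple weighted round robin: each server with capacity k receives
    // k consecutive requests before the next server takes over.
    static List<String> simpleWrr(LinkedHashMap<String, Integer> cap, int n) {
        List<String> out = new ArrayList<>();
        while (out.size() < n)
            for (Map.Entry<String, Integer> e : cap.entrySet())
                for (int i = 0; i < e.getValue() && out.size() < n; i++)
                    out.add(e.getKey());
        return out;
    }

    // Dynamic weighted round robin: the server with the lowest load factor
    // gets the next request; its load grows by 1/capacity on assignment.
    static List<String> dynamicWrr(Map<String, Integer> cap, int n) {
        Map<String, Double> load = new HashMap<>();
        for (String s : cap.keySet()) load.put(s, 0.0);
        List<String> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            String best = Collections.min(load.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            out.add(best);
            load.put(best, load.get(best) + 1.0 / cap.get(best));
        }
        return out;
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Integer> cap = new LinkedHashMap<>();
        cap.put("app1", 2);
        cap.put("app2", 1);
        System.out.println(simpleWrr(cap, 6)); // [app1, app1, app2, app1, app1, app2]
        System.out.println(dynamicWrr(cap, 6));
    }
}
```

With capacities 2 and 1, both strategies end up sending twice as many requests to app1 as to app2; they differ only in the order of assignment.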
The runtime system capacity is shown in the SAP Web Dispatcher Admin page from URL: http://<webdispatcherhost>:<webdispatcher http>/sap/wdisp/admin/default.html
Here the server capacity can be seen in the Capacity column under Monitor Server Groups. It is fetched via HTTP from the SAP Message Server. Since in XI/PI both ABAP and Java have Message Server, the parameters rdisp/mshost and ms/http_port (for the message server, this port must be configured as an HTTP port, by setting parameter ms/server_port_<xx> in the message server profile) decide which Message Server supplies the capacity information. Normally within an ABAP+Java system you have to specify the host and port of the ABAP Message Server because in this case only this has the full server information.
The ratio between servers is considered rather than the value itself. For example, if application server 1 has twice the capacity value of application server 2, the number of processed messages on application server 1 is approximately twice that of application server 2 as well. As said before, this ratio can be overwritten on demand.
**Be careful: you should only overwrite the capacity if you have determined that it needs to be changed while the system was running. In other words, the standard setting that the Web dispatcher gets from the message server is normally suitable.
There are generally three ways to overwrite the capacity value:
a. Change directly from the Web Dispatcher Admin page.
Simply right-click the Capacity label. This change takes effect immediately but is lost after a Web Dispatcher restart.
b. Set Web Dispatcher profile parameter.
You can overwrite the capacity value permanently by setting the profile parameter wdisp/server_<xx> in the SAP Web dispatcher profile. It has the following syntax:
wdisp/server_<xx>= NAME=<name>, LB=<capacity>
whereby <name> is the name of the instance (not the host name) and <capacity> is the capacity value of this instance. <xx> are numbers ascending from 0 (compare with Generic Profile Parameters with the Ending <xx>).
This change requires a restart of Web Dispatcher.
c. Change the location where the server capacity is stored.
The URL to retrieve the list is determined by the SAP Web Dispatcher profile parameter wdisp/server_info_location that by default is /msgserver/text/logon.
You can create a file ‘info.icr’ under the same folder as Web Dispatcher and set the value of wdisp/server_info_location to the path of the ‘info.icr’ file, e.g. file://info.icr/.
The structure of the ‘info.icr’ file is as the capacity list of all application servers. For more details refer to SAP note 645130 section 3.
Here we will discuss how to check runtime RFC load balancing. The configuration of RFC load distribution by logon group is not a major concern here, but the load balancing strategy should be taken into account to understand the runtime statistics. At the end of this part, the test tool "lgtst" will be introduced to test the load balancing.
The logon group concept was introduced to control incoming RFC requests. Normally these RFC calls are sent to a system via an RFC destination, providing a user and password, and on the receiving system they are treated like online users logging on to the system.
In an XI/PI system, thousands of RFC calls can occur in a very short period of time. One application server may be overloaded immediately while other servers receive no RFC calls, since the status of the logon group is only checked every 5 minutes. Therefore, dynamic logon load distribution was introduced, with two load balancing strategies: Best Quality and Simple Round-Robin (SAP Note 593058 - New RFC load balancing procedure).
Later, a random number was introduced as of release 6.20 during simple round-robin load balancing. As a result, different RFC programs have differently sorted lists of application servers. This ensures that not all RFC programs connect to the application servers in the same sequence.
As of Support Package 15 with Release 7.00 or Support Package 5 with Release 7.10, a new strategy named Weighted Round-robin procedure can be used to enhance the simple round-robin (SAP Note 1112104 – Weighted round robin procedure).
Go to SMLG and choose Msg-Srv Status Area. You can see the quality of each application server; a server with better quality always has a higher value in the quality column. Refresh the view and check the quality changes from time to time. If you discover that a server has a high quality value but always has a low load, you should verify whether the correct load balancing strategy is chosen.
Double-click the required logon group in the Logon Favorite Storage list. Version=1 means dynamic logon load distribution is not activated; you then have to go to SE16 and change the table RZLLICLASS accordingly. Details are in this SAP link:
The test program "lgtst" is also available from the SAP Service Marketplace; it can be used to check the load balancing procedure (SAP Note 64015 - Description of test program lgtst).
You can check which application server is used in sequence for each RFC call by this program on OS level:
You can also check the available information about an RFC logon group:
**If you find the load balancing does not meet your expectation, e.g. too many RFC calls on one application server, you have to review your load balancing strategy and choose the most suitable one.
You can check if load is distributed on all server nodes for certain adapter type (FTP, JDBC, JMS…) or all adapter types via link: http://<host>:<port>/MessagingSystem.
You can choose to check sent or received messages. Click into Configure Table Columns and select Node ID. Then the Node ID will be shown in the searching result. Sort the result by Node ID to get an overview of the load distributed on different server nodes.
A better way to get an idea of the message load distribution in peak hours is to use the Extended Filtering function, if you already know your node ID. You set the timeframe for the peak hour and specify the node ID. You only need to look at the total message number at the top, without displaying all the messages. This is helpful in that you do not have to count thousands of messages yourself, and you can also avoid the memory issues caused by displaying too many messages.
You can dig into special adapters by selecting Connection Name on the same page.
**If you have several server nodes and find that some of them have significantly fewer or no messages processed during a certain period, you can start your troubleshooting.
It is not easy to identify the mapping request distribution at runtime. Keep in mind that mapping requests are sent through the Gateway to the Java server processes using a simple round-robin strategy, so the number of registered processes for the destination AI_RUNTIME_<SID> determines the load on each server node during mapping runtime.
This blog doesn't discuss the implementation of load balancing. However, the documents below can provide a better understanding of load balancing in XI 3.0/PI 7.0:
During integration projects, one might face a requirement to trigger delta information, such as the creation or change of master data, out of SAP ERP.
There might be additional requirements to trigger this at scheduled interval or as a nightly job.
To fulfill the above requirement, change pointers can be activated for the required IDocs. A background job can then trigger the generated IDocs to PI.
Following are the applicable situations:
1. Trigger delta data
2. Batch processing
3. Timed/Nightly processing
The solution is to activate change pointers in Customizing and use the report program RBDMIDOC. Separate jobs can be scheduled for separate IDocs.
1. Go to the following path in transaction SPRO.
SPRO > SAP NetWeaver > Application Server > IDoc Interface/ALE > Modeling and Implementing Business Processes > Master Data Distribution > Replication of Modified Data > Activate Change Pointers for Message Types.
2. Select the IDOCs where change pointers are to be activated and save your settings.
3. Run transaction BD22 to delete existing Change pointers. This is required to clear the existing change pointers. If this is not done, enormous amount of IDOC might be generated during the first run.
4. Run Schedule report RBDMIDOC ( TCode: BD21)
Tip: You can check for processed IDOC’s using WE05.
Customers continue to adopt SAP NetWeaver Process Integration (SAP NetWeaver PI) 7.1, including enhancement package 1 (EHP 1), in their productive landscapes. As of December 2009, already around one third of all customers of SAP NetWeaver PI are live on either SAP NetWeaver PI 7.1 or EHP 1 for SAP NetWeaver PI 7.1. That is, as of December 2009 more than 760 customers use SAP NetWeaver PI 7.1 (including EHP 1) productively, distributed over more than 1000 live installations of SAP NetWeaver PI 7.1 or EHP 1 for SAP NetWeaver PI 7.1.
With this strong adoption of SAP NetWeaver PI 7.1 including EHP 1, I would like to provide you with more examples of real customer scenarios in which SAP NetWeaver PI 7.1 or EHP 1 for SAP NetWeaver PI 7.1 is used productively. As in the previous information about real customer scenarios with SAP NetWeaver PI 7.1, I would like to share customer examples from different industries. In the new presentation about real customer scenarios with SAP NetWeaver PI 7.1 including EHP 1, you can find seven customer examples from the following industries:
The presentation about the live customer examples includes information about the scenarios that these customers have implemented, as well as the main benefits that SAP NetWeaver PI 7.1 including EHP 1 provides to them. The system landscapes of our customers are heterogeneous and can consist of many SAP as well as non-SAP applications. Thus SAP NetWeaver PI 7.1 including EHP 1 is used productively to integrate non-SAP systems with non-SAP systems, non-SAP systems with SAP applications, and SAP systems with SAP systems. And more and more customers choose SAP NetWeaver PI 7.1 as their central and strategic integration platform, replacing and migrating third-party middleware so that they use one integration platform, namely SAP NetWeaver PI 7.1, for both SAP and non-SAP systems.
SAP NetWeaver PI 7.1 including EHP 1 is used by our customers for a broad range of business scenarios. Since SAP NetWeaver PI 7.1 is, besides SAP NetWeaver Composition Environment (CE) 7.1, an essential part of SAP’s SOA (service-oriented architecture) infrastructure, customers can use SAP NetWeaver PI 7.1 to apply SOA principles. To make this more transparent, here are some examples of SOA principles that are commonly adopted by customers:
Throughout the presentation about productive customer scenarios with SAP NetWeaver PI 7.1 including EHP 1, you can find examples of how these customers apply those SOA principles and pave their way to a service-oriented architecture.
At the end of this blog I would like to point you to two further sources of information:
And stay tuned for the upcoming next version of SAP NetWeaver PI. At TechEd 2009 we already announced the most important planned benefits, more details will be provided soon. The ramp-up for the next version of SAP NetWeaver PI is currently planned for the second half of 2010.
People who have worked since the ramp-up of XI 3.0 or earlier generally know the ins and outs of XI 3.0 administration. Back then, it was an all-in-one role (we used to call it the all-win role) where a one-man army would install, develop, take the objects all the way through production, go-live, and support. But with SAP XI/PI widely accepted as an integration broker and ESB, it became the responsibility of NetWeaver administrators, a.k.a. the Basis team, to maintain XI/PI systems, and there was a clear distinction between the roles and responsibilities of the PI developer and the administrator.
When messages from the Adapter Engine simply vanish into a vacuum without reaching the Integration Server, IDocs don’t reach the target systems, or logs and traces are not active, a seasoned XI/PI consultant will be tempted to go to SXMB_ADM or IDX1/IDX2 to check whether the post-installation steps were performed properly. But with limited authorizations, and as per the development process, one has to raise an issue for the NetWeaver administrator to figure it out.
One such time, SLDCHECK failed and I had to wait for days while the issue remained unresolved. I wanted to get to the root of it and check the configuration, but I didn’t have the authorization. So I started debugging SLDCHECK and came across the PIAPPLUSER password. (Administrators generally try to keep passwords consistent, or at least logical, across the landscape; (un)luckily, they had used the same password for PISUPER.) I jumped for joy like a kid who had found a bag of candy hidden right under his desk. I hacked in, found the issue, and asked the NetWeaver administrator to check that specific configuration, and they fixed it.
Hacking in PI 7.0
LCR_LIST_BUSINESS_SYSTEMS uses the configuration maintained in transaction SLDAPICUST to access the SLD and get the list of business systems. For local SLD installations, it uses SLDAPIUSER. For central SLDs, however, SAP recommends replacing SLDAPIUSER with PIAPPLUSER in SLDAPICUST (as per the configuration and post-installation guides).
Refer to Section 2.4, Basic SAP System Parameters, and Section 5.17.1, Performing PI-Specific Steps for SLD Configuration, for more details on maintaining the SLD connection parameters.
Figure 1.0 SLDAPICUST Configuration in PI System
The LCR_LIST_BUSINESS_SYSTEMS function module can be hacked to get the PIAPPLUSER password: set a breakpoint at line 67.
Figure 2. Breakpoint in LCR_LIST_BUSINESS_SYSTEMS
create object accessor.
accessor->set_tracelevel( tracelevel ).
Figure 3.0 PIAPPLUSER password hacked
Caution: Changing configurations by using the Hacked users/passwords is strongly discouraged.
Word to SAP: SAP can take this as positive feedback and release a note to encrypt the password.
Helpful Links
Change PI Service user passwords with caution
SAP Note 999962: PI 7.10: Change passwords of PI service users
SAP Note 936093: XI 7.0: Changing the passwords of XI service users
A lot of my recent projects were all about workflow. HCM, the Human Capital Management module with ESS, MSS, and the Universal Worklist, is one example. SRM and purchasing is another frequently requested project. And if you look at PI-based projects, you can see a rise in applications that use BPM and „human intervention”, the dramatic-sounding term for BPM-based workflow where users interact with the Process Integration flows.
I noticed the need to discuss the impact of all these “little” projects with customers in broader terms, to put these activities into perspective for their companies in terms of direction and investment. And I thought I would share a summary of these thoughts with the community.
At most companies, the so-called „silo applications”, the optimization of classic modules like MM, SD, or PP, are maturing and coming to an end. New applications have a shorter, more direct focus („Apply for Vacation” rather than „Time Management”), and within this focus you often have the requirement for human interaction.
When discussing these projects with customers, I advise them not to look only at the distinct single project, but also at the overall integration of all these single projects into a process design strategy.
Most of these new projects have a Web Dynpro ABAP-based frontend component. WD ABAP has quickly proven to be a well-suited component for quickly creating “rich client” user applications. When an Enterprise Portal is involved, the integration of all applications with a workflow component is usually done using the Universal Worklist. From the user’s point of view, all process interactions start and end in the UWL.
From a system perspective, SAP Process Integration (PI) with its BPM component can run the technical side of the integration extremely well.
If you look at the situation of most companies, business processes across boundaries with human interaction are a logical step. Process optimization, which should always be accounted for at the end of an IT project, leads to the optimization of a series of single events coupled into a whole process. And this is exactly the business reason for small, fast, people-driven software design.
If the process is not integrated, does not eliminate media breaks, and is not truly people-centric (the real “ease of use”), you will gain no benefit. This goal should always be kept present in these projects.
When you have a lot of processes, distinct Web Dynpro applications, and web services used for interaction with SAP backends, the more daunting task is to manage all these processes. The question is: how are all these development activities organized?
This is where the PI Enterprise Services Repository comes into play. The Enterprise Services Repository is the central place to store information about data, processes, interfaces, locations, and process descriptions. Together with the Services Registry for web services, it is a powerful extension of the development process itself.
Workflow-based processes should be part of a strategic movement in the enterprise towards a process-oriented, modeled business design. This is definitely an evolution, not a big-bang project. All project members need to gain experience. Developers, business owners, and the technical implementation need to slowly grow into a central element of all future projects.
Workflow will integrate transactions, interfaces, and people inside and outside the company into overall business processes. This is the real new application of the next decade, and you will find no other platform with this level of integration. The sum of all these components, and the art of mastering them, makes up the new IT architecture for the next decade.
The Background Story:
The subject is content conversion in the File Receiver adapter. For those who have read the SAP Help: it states that the adapter expects the XML to be in the format below.
So, ideally, the adapter doesn’t expect an XML with hierarchies?
The expected output file format (taking a real-life scenario from the utilities industry) needs to be as follows:
HEADER – Fields of Header (1-1)
(TRANSACTION – Fields of Transaction
METERPOINT – Fields of Meter Point
ASSET – Fields of Asset
REGISTRATION – Fields of Registration
READING – Fields of Reading)
TRAILER – Fields of Trailer (1-1)
A sample file output;
NOTE: The TRANS to READG segments will be a repeating set.
When we start the design, the ideal data type to suit the file content conversion would be as below:
The above exactly fits the expected XML format for the File adapter to perform content conversion.
Using the above message structure in the mapping, however, reveals a glitch.
You would notice that all the TRANS, METER, etc. nodes are now collated together. But this is not what we want, since we need the first TRANS followed by the first METER, and so on.
This is due to a context issue. The truth is, I have never been able to figure out a context mapping to handle this. And even if I could, would it prove to be too much of an effort?
So this is what I propose;
Create your Data type as follows;
Carry on with your mapping. This time since we have introduced the BODY_NODE, it will ensure that the context is maintained.
The output of the mapping will now be;
Everything is as expected. The only thing pending is the file content conversion to create the output file.
NODECEPTION aka Node Deception:
To have the XML prepared as expected by the File adapter, we will use a simple trick I have come to call NODECEPTION = NODE + DECEPTION.
The name fits because we trick (DECEPTION) the File adapter by removing the BODY_NODE node (NODE) from the XML and providing it with the structure expected for content conversion.
Use the java code as per this link.
The Java code will remove all instances of the BODY_NODE tag from the XML.
Note: For PI 7.1, I have used parameterization to make the code dynamic and hence reusable. If using XI 3.0 or PI 7.0, you can remove the parameterization snippet from the code.
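The core of the trick can be sketched in plain Java. This standalone version is illustrative only: the class and method names are mine, the node name is hard-wired for the demo instead of being passed as a mapping parameter, and the PI mapping API wrapper used by the actual linked code is omitted.

```java
// Standalone sketch of the "Nodeception" idea: strip the <BODY_NODE>
// wrapper tags while keeping their children, so the File adapter sees
// the flat structure that content conversion expects.
public class NodeceptionSketch {

    // Remove all opening and closing tags of the given node from the XML.
    // The children of the removed node stay in place.
    static String removeNode(String xml, String nodeName) {
        return xml
                .replaceAll("<" + nodeName + "\\s*>", "")
                .replaceAll("</" + nodeName + "\\s*>", "");
    }

    public static void main(String[] args) {
        String in = "<MT_File><HEADER>h</HEADER>"
                + "<BODY_NODE><TRANS>t</TRANS><METER>m</METER></BODY_NODE>"
                + "<TRAILER>x</TRAILER></MT_File>";
        System.out.println(removeNode(in, "BODY_NODE"));
        // prints <MT_File><HEADER>h</HEADER><TRANS>t</TRANS><METER>m</METER><TRAILER>x</TRAILER></MT_File>
    }
}
```

In a real PI Java mapping the same string (or stream) manipulation would run inside the mapping class, reading the payload from the input stream and writing the stripped result to the output stream.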
Add the Java mapping to your operation mapping, so that it is executed after the original graphical mapping.
The output would be as below;
Now that we have the resulting XML as above, simple FCC parameters can create the output file.
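For reference, with the flat structure the receiver channel’s content conversion parameters could look roughly like the following. The record names are the illustrative segment names from this example and must match your actual structure, and the separators depend on your target file format:

```
Recordset Structure: HEADER,TRANS,METER,ASSET,REGIS,READG,TRAILER
HEADER.fieldSeparator: ,
HEADER.endSeparator: 'nl'
TRANS.fieldSeparator: ,
TRANS.endSeparator: 'nl'
```

A corresponding fieldSeparator/endSeparator pair would be maintained for each of the remaining record types (METER, ASSET, REGIS, READG, TRAILER) in the same way.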
The Next Steps:
The above ‘Nodeception’ method is what I found to be the easiest solution in such a scenario. Do you have a better or easier way? Or do you know how to manipulate the FCC parameters to handle hierarchy?
If you do, I request you to document your solution in this Wiki.