How to Customize Java Virtual Machine Settings in Oracle WebLogic Server 12c on Linux/UNIX

Reblogged from change_on_install – an Oracle blog

To achieve the best application performance and to avoid performance bottlenecks (and "OutOfMemory" problems), you need to tune your Java Virtual Machine. After you install WebLogic Server and create a domain (WebLogic, SOA, Forms, OBIEE, etc.), you may want to set properties such as the Java heap size, tune the Java garbage collection, and adjust the WebLogic Server start options.

Tune JVM settings

You can set the variable USER_MEM_ARGS for the Admin Server and each Managed Server to tune the JVM parameters. For example:

USER_MEM_ARGS="-Xms1g -Xmx3g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:NewSize=1g"
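
Where exactly you set this depends on your setup. One common approach in WebLogic 12c (assuming version 12.1.2 or later, where the standard domain scripts pick up custom overrides if present) is to export the variable in $DOMAIN_HOME/bin/setUserOverrides.sh – a minimal sketch, with example values you should adjust per server:

# $DOMAIN_HOME/bin/setUserOverrides.sh
USER_MEM_ARGS="-Xms1g -Xmx3g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:NewSize=1g"
export USER_MEM_ARGS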


Servicebus 12c: Using configuration files for customizing service deployments

Introduction

In my recent projects I faced various challenges around managing environment-specific artefact configurations that need to be adjusted during the rollout of a particular service component. As a result, I decided to write down my experiences and solution approaches on these topics. This article is the starting point of a short blog series dealing with different challenges in this area, where the following topics will be covered:

  • General: Using configuration files for customizing service deployments
  • Adjust environment-specific configurations at deployment time
  • Evaluating environment-specific configurations at runtime
  • Security: Approaches for credentials management

Configuration files in Servicebus

During its lifecycle, a specific service component needs to pass different quality gates to ensure quality, correctness and stability before it is approved for production rollout. For that reason, deployments to different environments have to be performed. This usually means changes in configurations like service endpoints or timeout parameters, which must be possible without changing a component's implementation and rebuilding the component. In Oracle Servicebus this can be achieved by using so-called configuration files (aka customization files in OSB 11g). With those it is possible to consistently change service properties and configurations for implementation artefacts like proxy services, service pipelines and business services. Within a configuration file you define actions like replace or search and replace to adjust the corresponding configurations.

Generating a basic configuration file can be done using the Servicebus console.

[Screenshot: Create Customization File dialog in the Servicebus console]

As can be seen in the screenshot above, the services that should be adjusted by the configuration file can be selected. A file is then generated containing replace rules for all adjustable configurations.

One thing to notice when working with Servicebus configuration files is that actions used to replace parameters may throw errors at deployment time if services are referenced explicitly by name and the referenced services are:

  • not part of the current deployment, and
  • not yet deployed on the server.

Approaches for using configuration files

Depending on the requirements, you can choose between a local and a global approach when working with configuration files. The main differences between these approaches are the level of granularity at which adjustments are possible, the way the configuration file is defined, and the way configuration files are organized.

Central configuration management using property files

A detail common to both approaches: the configuration file is used in a template style, where specific configurations are expressed as properties, like hostnames or ports. The values for the properties are defined in environment-specific property files. At deployment time, the property file for the target environment is used to replace the corresponding properties in the configuration file template. This can, for example, be done using the maven replacer plugin. This approach ensures that only one configuration file needs to be maintained, instead of one per environment.

A property file may look like the following:

sbserver.clusteraddress=sbcluster.test
helloworld.retryCount=0

Local configuration file approach

A local approach requires an explicit configuration file per Servicebus project. So whenever a new service is implemented, a new configuration file needs to be created that is used exclusively for this service. In addition, the central property files have to be adjusted to contain the newly created properties.

Because every project has its own configuration file, this approach is very powerful regarding the granularity with which configuration parameter replacements can be defined, as the replacements clearly relate to a specific service. One difficulty of this approach is that a separate configuration file needs to be maintained per service, which can become a bit messy.

The following shows an example of a fine-grained, service-specific action:

<cus:customization xsi:type="cus:EnvValueActionsCustomizationType">
  <cus:description/>
  <cus:owners>
    <xt:owner>
      <xt:type>BusinessService</xt:type>
      <xt:path>service/v1/HelloWorldService</xt:path>
    </xt:owner>
  </cus:owners>
  <cus:actions>
    <xt:replace>
      <xt:envValueType>Service URI</xt:envValueType>
      <xt:location>0</xt:location>
      <xt:value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">
        http://${sbserver.clusteraddress}/HelloWorldSvc
      </xt:value>
    </xt:replace>
    <xt:replace>
      <xt:envValueType>Service URI Table</xt:envValueType>
      <xt:value xsi:type="tran:URITableType" xmlns:tran="http://www.bea.com/wli/sb/transports">
        <tran:tableElement>
          <tran:URI>http://${sbserver.clusteraddress}/HelloWorldSvc</tran:URI>
          <tran:weight>0</tran:weight>
        </tran:tableElement>
      </xt:value>
    </xt:replace>
    <xt:replace>
      <xt:envValueType>Service Retry Count</xt:envValueType>
      <xt:value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">
        ${helloworld.retryCount}
      </xt:value>
    </xt:replace>
  </cus:actions>
</cus:customization>
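
For illustration: applying the test property file shown earlier (sbserver.clusteraddress=sbcluster.test, helloworld.retryCount=0) to this template would render the Service URI action in the environment-specific configuration file roughly like this (the retry count placeholder is resolved analogously):

<xt:replace>
  <xt:envValueType>Service URI</xt:envValueType>
  <xt:location>0</xt:location>
  <xt:value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    http://sbcluster.test/HelloWorldSvc
  </xt:value>
</xt:replace>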

Global configuration file approach

Using a global approach, only one central configuration file is needed, which defines the replacements for all services. So if a new service is created, only the central configuration file and the property files need to be adjusted to replace the respective configurations of the new service.

As the configuration file is global and used every time a service is deployed, there are limitations regarding the actions that can be used, and hence also regarding the configuration parameters that can be replaced. It works without problems as long as no action references a service explicitly by name; the actions should only specify the type of service to be touched (proxy or business service). Such replacement rules are generic and very flexible.

Configuration files defined in this style are usually clear and comprehensible, but limited regarding the granularity of the replacements. It is perfectly fine and easy to manipulate service endpoints (hostnames, ports, URLs), but properties like the HTTP connection timeout or other numeric configuration parameters are hard to replace using global configuration files, because those configurations are usually specific to an individual service.

An example of a generic action is a search and replace, which may look like the following:

<cus:customization xsi:type="cus:FindAndReplaceCustomizationType">       
  <cus:description/>
  <cus:query>
    <xt:resourceTypes>BusinessService</xt:resourceTypes>
    <xt:includeOnlyModifiedResources>true</xt:includeOnlyModifiedResources>
    <xt:searchString>http://devserver.com:7101</xt:searchString>
    <xt:isCompleteMatch>false</xt:isCompleteMatch>
  </cus:query>
  <cus:replacement>https://${sbserver.clusteraddress}</cus:replacement>
</cus:customization>

Conclusion

The decision whether the local or the global approach is more applicable should be made based on the requirements of the particular project. In most of the projects I was involved in, the global approach described in this article was adequate. Usually it is sufficient to just replace information regarding the service endpoints. But if more control over the replacements is needed, the local approach is definitely the one to prefer.

The next article of this series will show how to deal with configurations that cannot be directly manipulated with configuration files – for example, when a different API key is needed in the HTTP request header depending on the current environment.


Camunda – Now a Zero-Code BPM Solution Too?

Camunda has long advertised that its solution does the right things and combines the advantages of custom development with those of so-called BPM suites. BPM suites very often promise automation of business processes without IT support, i.e. zero-code BPM or, as camunda also calls it, "Bullshit BPM".

We all know the situation in practice: IT is often still a mystery to the business departments. Implementing new functionality seems difficult and takes a lot of time. Yet the business wants its requirements implemented ever faster, and the desire for self-determination grows. So the business starts looking for solutions, and BPM as a synonym for BPM suites – and thus zero-code – seems to be the right choice.

Vendors are invited and evaluated, and at first glance it looks like the ultimate solution. Unfortunately, many of these attempts fail. The solutions often cannot meet the expectations, and you hit the limits of what is possible in many places. The expensive suite does not seem to pay off.

Camunda has so far taken a different approach and concentrated on the execution level (the process engine). IT is therefore still needed to implement business processes, while the points that really matter to the business departments are addressed: transparency (a shared BPMN model) and implementation speed (the engine already provides a lot and relieves the developer).

So far, so good – but what is going on with camunda now? Is camunda also becoming a zero-code BPM solution?

The new version 7.5, released on 31 May 2016, includes a new feature: the so-called "element templates".

Element templates make it possible – by providing prefabricated, parameterized Java classes together with an XML description – for non-developers or business-oriented developers (citizen developers) to do many things themselves through configuration in the Camunda Modeler. In addition, a high degree of reuse of the Java classes created this way is ensured.

A good use case is probably e-mails that are to be sent from within processes. For this example I would like to point to the camunda blog: https://blog.camunda.org/post/2016/05/camunda-modeler-element-templates/
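
To illustrate the kind of parameterized Java class such a template typically wires up, here is a minimal, hypothetical sketch of a Camunda JavaDelegate with injected fields (class, field and value names are invented for illustration; this is not the example from the linked post):

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.Expression;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Reusable delegate: the concrete values for "to" and "subject" are injected
// per service task via field injection, configured through the element template.
public class SendMailDelegate implements JavaDelegate {

  private Expression to;      // e.g. a fixed address or an EL expression
  private Expression subject;

  @Override
  public void execute(DelegateExecution execution) throws Exception {
    String recipient = (String) to.getValue(execution);
    String mailSubject = (String) subject.getValue(execution);
    // Actually sending the mail is omitted here; the point is that the same
    // class can be reused with different parameters set in the Modeler.
    System.out.println("Sending mail to " + recipient + ": " + mailSubject);
  }
}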

Further use cases could be:

  • Templates for system connectors (e.g. ERP, CMS, DMS, ...)
  • Customizing of workflows/processes (e.g. per tenant)
  • Providing customizable standard software
  • And much more.

Element templates can be applied to many BPMN types, currently including "Activities", "Sequence Flow" (conditions) and "Process". Given this broad basis, I am already looking forward to seeing them used a lot in practice.

With this new feature, camunda provides a useful addition and stays true to its line of doing the right things.
For me personally, this feature is one of the most important ones in the new 7.5 release.

 


Migrating Business Processes Using Camunda's Version Migration Plugin

In a recent project, I was faced with several versions of a process model and a lot of instances for each version. This made it harder to keep every case in mind, for example when it came to new backend developments (which are centralized and have to work for all versions). I had the idea of simply migrating all older instances to the latest version to keep life simple.
I checked out Camunda's Version Migration Plugin for Cockpit because I was hoping to bring some order into the mess. Here is what I did.

First of all, I installed the plugin. For that, I cloned the project and simply built the jar using mvn install; afterwards the jar can be found in the target folder.
I put that jar into the lib folder of Camunda on my Tomcat (../apache-tomcat-7.0.62/webapps/camunda/WEB-INF/lib/) and started the server. The Cockpit plugin then appears as a newly added tab in the instance view, see the picture below.

[Screenshot: the Version Migration tab in the Cockpit instance view, showing the "you are playing with fire!" warning]

If you haven't had a strange feeling in your stomach about version migration already, you will have it now, reading "you are playing with fire!". I strongly recommend reading the linked Risks and Limitations section, which you can find in the Git repository.

Brave as I am, I started playing with fire.

My first step was to deploy a simple one-user-task process and to start several instances for migration:
[Screenshot: the deployed process with several running instances]

That done, I slightly changed the process model and deployed it as version 2; I just added another user task. After deployment, I started the migration:
[Screenshot: starting the migration from the Version Migration tab]

After the migration, I was directly transferred to the model view of my V2, seeing the successfully migrated instance. So far so good.
[Screenshot: the migrated instance in the V2 model view]

Now, I improved the model by adding a check on a process variable which did not exist in previous versions:
[Screenshot: updated model with a check on a process variable]
Well, the migration worked, but as you can imagine, after completing user task 2 I was left with this message:
[Screenshot: error message caused by the missing process variable]
Don't panic! This is not a migration error, but a normal process-model-isn't-very-sophisticated error. The instance just rolled back to the last transaction – user task 2. After adding the demanded variable (simply use Cockpit or Tasklist) the process instance ran just fine.
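
Instead of using Cockpit or Tasklist, the missing variable can also be added through the engine's Java API – a minimal sketch, assuming a running engine and using a hypothetical instance id and variable name:

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.RuntimeService;

public class AddMissingVariable {

  public static void main(String[] args) {
    // obtain the default process engine (assumes the engine is bootstrapped)
    ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
    RuntimeService runtimeService = engine.getRuntimeService();

    // set the variable the new model version expects but the migrated instance lacks
    String processInstanceId = "42";                      // hypothetical id
    runtimeService.setVariable(processInstanceId, "checkPassed", true);
  }
}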

As you might have noticed, I also added a service task calling a delegate. I even used that same delegate as an execution listener on user task 2 (event type: end), where the activity/token was currently waiting. After migrating instances to that version, I successfully tested that delegate call as well.

Then I intentionally violated the documented limitations and tried to migrate an instance waiting at user task 1 to a version that does not contain any activity with the ID UserTask_1. As expected, it did not work:
[Screenshots: the rejected migration attempt and the corresponding error message]
But hey! My instance was still not burning. The migration simply was not successful, and the instance still lived in its original version.

I will keep playing around with business process migration and will share new findings. Until then:

Short conclusion:

  • The subject of process migration is more stable than I thought.
  • Adding stuff won't break the migration (and/or the instance), as long as existing activities can be found in the new version as well.
  • If you mess up, you are being warned and the migration does not go on (no guarantee :-)).
  • If you did not think about variables needed in later gateways, the instance will just roll back, as we are used to (no guarantee :-)).
  • The plug-in lets you migrate only at the instance level. That means a lot of clicking, but also more safety.
  • Carefully check the delta between one version and another. Find what's new and add it. Use the API, Cockpit or Tasklist for variables.
  • Carefully check whether it is better to migrate all older versions (V1, V2) directly to the latest (V3) or to migrate from version to version (V1 -> V2 -> V3).
  • Always keep in mind that the examples above are pretty basic. You should always test a production migration against similar complexity (e.g. number of activities) before pushing the button.

Before I migrate instances of productive processes, I will carefully investigate them, looking for existing activities (including boundary events), variables and so on.
I will test the constellations on other stages first.
And I will only do that for instances of currently implemented processes, because I fear the risk of overlooking or missing details of older implementations… until a customer forces me. :-)

Happy migrating,

Stefan


First experience with AWS IoT – Amazon IoT Hackathon

I recently had the opportunity to attend the Amazon IoT Hackathon in Munich. I expected a lot from the event and I was not disappointed. But more on the event later; let's start with a brief introduction to the AWS IoT cloud.

Amazon released its IoT Cloud at the very end of last year. Its main purpose is to handle device management and communication from and to the devices (aka "Things"). Its architecture is shown in the following figure.

[Figure: AWS IoT architecture overview]

The communication between the devices and the cloud is primarily based on the MQTT protocol. The devices publish state changes and custom events to MQTT topics and can subscribe to other topics so that the cloud can notify them of desired state changes (more on this later).

Before a device communicates with the cloud for the first time, a certificate needs to be created for it. Furthermore, a policy needs to be attached to the certificate, defining the rights of the device (enabling it to publish messages to topics or to subscribe to topics). A policy for testing IoT applications could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "iot:*" ],
      "Resource": [ "*" ]
    }
  ]
}

Here we see the first integration within the Amazon Cloud: policies are standard IAM policies as known from AWS Identity and Access Management.
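
As a sketch, the certificate and policy setup can also be scripted with the AWS CLI (assuming the policy document above is saved as iot-test-policy.json; the names used here are illustrative):

# create a certificate plus key pair and activate it
aws iot create-keys-and-certificate --set-as-active

# register the permissive test policy shown above
aws iot create-policy --policy-name IoTTestPolicy --policy-document file://iot-test-policy.json

# attach the policy to the certificate (use the certificateArn returned by the first call)
aws iot attach-principal-policy --policy-name IoTTestPolicy --principal <certificateArn>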

Speaking of integration with other Amazon products: rules can be defined within the AWS IoT cloud, which can then forward messages to various other Amazon Cloud products, including S3 for storing the message, Kinesis for stream processing, Lambda for executing custom code on receipt of the message, etc. Rules have an SQL-like syntax:

SELECT <JSON attributes> FROM <MQTT topic> WHERE <filter conditions based on the JSON attributes>

So if you have a device submitting temperature values every second to the topic tempSensorData and you want to be notified via email if the temperature is above 40 degrees, you can create a rule

SELECT temp FROM tempSensorData WHERE temp > 40

and configure an action to send a message to SNS (Simple Notification Service), which will send out the mail.

As you might have noted, however, rules only act on single messages; if aggregation of messages is required, the messages need to be forwarded to Kinesis and the resulting stream needs to be processed.

Devices can also have state information – for example an LED can send its current color to the cloud. This information can be stored in device shadows. These device shadows are always available, even if the real device is not connected, so other applications can read the last state of the device. Applications can also update the state of the device shadow (asking the LED to turn green, for example). A state is always made up of a desired and a reported value.

You can change the desired value in the cloud and this change will be propagated to the real device – the message containing the delta to the previous state is sent to an MQTT topic the device has subscribed to. Once the device has completed the state change, it will report back the current state in the reported attribute. From this moment on, the cloud will know that the change is processed by the device and the device shadow is in sync again. So if you want to change the color of the LED to green, you can change the attribute of the device shadow to

{
  "desired": {"color":"green"},
  "reported": {"color":"red"}
}

Once the color is set by the device, it will report back:

{
  "desired": null,
  "reported": {"color":"green"}
}

There is also a registry component for storing metadata about the devices – device location or manufacturer for example. This information can be used to query a set of devices – for example find all the devices in Munich.

You can do all these activities from the AWS Console, but Amazon has also built an API to do the administrative stuff programmatically (provided the appropriate policies allow you to). To speed up connecting devices to the cloud, Amazon has also released client SDKs which you can use on your device to connect it to the cloud – currently C, JavaScript and Arduino Yún libraries are available.

Now let's get back to the event itself:
we started with some presentations in which the main concepts of AWS IoT were explained. We also had a short introduction from Intel on the hardware part of the hackathon. Later on came the more interesting part: we formed teams, and each team received an Intel Edison board with some sensors and a small display attached.

[Photo: Intel Edison board with attached sensors and display]

Each team had to build something with the Edison and the AWS products. The time limit was about four hours. Sample code was provided, which read out all sensor values and sent them to the cloud every second. In the end, each team had to present their idea, and everyone voted for the best idea/solution presented.

Our team members had little to no experience with AWS products, so it was quite a challenge for us to create something useful within this short time period. We wanted to build something that not only sends data to the cloud but also receives data from it (in order to test the communication in both directions). After a short discussion, we had our idea: we wanted to build a barkeeper training application.

Our idea was: the Edison is attached to a shaker, and with the help of the accelerometer the power of the shaking is measured and submitted to the cloud. The values are aggregated there, and after a minute a text is sent to the display of the trainer (in our case, the display on the Edison) indicating whether the shaker was shaken correctly. A button on the bottom of the shaker (in our case, on the Edison) should detect that the shaker has been put back on the table and reset the text message.

Creating a certificate, installing it and starting the Edison was an easy task. Sensor  messages started arriving in the cloud immediately. Creating a rule and forwarding the messages to Kinesis (for aggregating them) was also no problem.

For the aggregation itself we did not find a solution within the small time window. Apparently this is not so easy to do with the current cloud offering of Amazon; custom code is inevitable here. Amazon has, however, already noticed that a component is missing here and has started developing Kinesis Analytics, which should simplify the aggregation of events in the future.

Instead of analysing the stream directly, we created a Kinesis Firehose rule to buffer events for a minute and then write the aggregated data to S3 in CSV format. Setting up an S3 trigger that fires when a new file has been written and starts a Lambda function calculating whether the shaking was OK was very straightforward as well.

This way we evaluated whether the shaker was shaken enough during that minute (although we could not use a sliding window over the data of the last minute because of the lack of stream processing). Next up was sending a text message back to the device. For that, we needed to update the device shadow. It took us quite some time to figure out how to use the SDK for this, but in the end we succeeded in creating a Lambda function that updates the device shadow and thus sends data back to the device.
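
To give an idea of that step, here is a minimal, hypothetical sketch of such an S3-triggered Lambda handler in Java (bucket layout, CSV format and the threshold are assumptions; the shadow update itself is only indicated as a comment, since the exact data-plane client setup depends on the SDK version used):

import java.io.BufferedReader;
import java.io.InputStreamReader;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Triggered by the Firehose output written to S3: averages the shake power
// of the buffered minute and decides whether the shaking was good enough.
public class ShakeEvaluationHandler implements RequestHandler<S3Event, String> {

  private static final double SHAKE_THRESHOLD = 2.5; // assumed threshold

  private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

  @Override
  public String handleRequest(S3Event event, Context context) {
    // Firehose writes one object per buffer interval; read the first record
    String bucket = event.getRecords().get(0).getS3().getBucket().getName();
    String key = event.getRecords().get(0).getS3().getObject().getKey();

    double sum = 0;
    int count = 0;
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(s3.getObject(bucket, key).getObjectContent()))) {
      String line;
      while ((line = reader.readLine()) != null) {
        // assumed CSV layout: timestamp,shakePower
        String[] parts = line.split(",");
        sum += Double.parseDouble(parts[1]);
        count++;
      }
    } catch (Exception e) {
      throw new RuntimeException("Could not read aggregated sensor data", e);
    }

    boolean shakenEnough = count > 0 && (sum / count) >= SHAKE_THRESHOLD;

    // Here the result text would be written to the device shadow's "desired"
    // state (via the AWS IoT data-plane API), so the Edison can display it.
    context.getLogger().log("Shaken enough: " + shakenEnough);
    return shakenEnough ? "well shaken" : "shake harder";
  }
}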

The last task was an easy one as well: reacting to a button push to invoke the previously created Lambda function and send the "reset" text. For this, we created another rule, which invoked Lambda.

It was interesting to see how easy it is to get started with the AWS IoT cloud and to configure the first rules. We noticed that if you can use the AWS built-in integrations (of which there are quite a few), configuring your integration is easy. Once you need something more complex, you can use custom code (for example in a Lambda function), but for that some practice and knowledge of the AWS APIs is an advantage. What we really missed was an easy way to aggregate IoT messages and react to the aggregated results – but, as said earlier, Amazon is already working on this.

All in all, it was great to see how seamlessly multiple AWS components work together and how easy it is to start working with AWS. At the end of the day, our solution was voted 2nd out of roughly 15 teams – we really cannot complain about that result!

Summarizing my impressions:

  • Amazon AWS (and the IoT Cloud) is easy to start with; you can set up a sample integration within minutes.
  • For more complex integrations, custom code is needed. Experience is very helpful if you are planning to do that.
  • There is a huge list of available Amazon Cloud products, with lots of different cost structures. Some experience is needed to find the best combination of products for your integration needs, too.
  • In one sentence: Amazon AWS is easy to start with, challenging to master.

FredBet – A Betting Game on Its Way to the Cloud

Only one more day and the European Football Championship opens its gates in France. Even if it only happens every two or four years (Euro or World Cup), competing together with friends and colleagues is great fun. Armed with beer and bratwurst, you root for the German team to advance right up to the final whistle. To give the competition even more weight, all that is missing is a small betting game in which you can prove your knowledge of which team is supposedly better.

The article FredBet – Ein Tippspiel auf dem Weg in die Cloud describes how the betting game created for the UEFA Euro 2016 came about and how it can be run in the Docker Cloud.


On the Way from Continuous Integration to Continuous Delivery

When introducing Continuous Delivery in a company, there are a number of topics you have to think about and take into account. The article Auf dem Weg von Continuous Integration zu Continuous Delivery looks at the important topics and challenges involved.
