Software Maintenance meets Microservices

When an application has been in production for several years, bug fixes and the many enhancements that were surely made over time may have left their marks. This can show itself as a deterioration in maintainability. Often the application was developed by a project team and then handed over to a maintenance team, which naturally leads to a more or less significant loss of knowledge. Depending on its extent, this knowledge loss also contributes to deteriorating maintainability. Many other factors are conceivable.

Improving testability can often help the maintenance team increase maintainability.
But which kinds of tests should be in place? In what quantity? What can be adapted from current topics in software development?
These and further questions arise in such a situation.

Testability is also discussed in the context of microservices architectures. An article on Martin Fowler's website provides possible answers to the questions above.
It first defines the microservices architecture and then presents the various test strategies.

The test strategies at a glance: http://martinfowler.com/articles/microservice-testing/#conclusion-summary


  • Unit tests: exercise the smallest pieces of testable software in the application to determine whether they behave as expected.
  • Integration tests: verify the communication paths and interactions between components to detect interface defects.
  • Component tests: limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.
  • Contract tests: verify interactions at the boundary of an external service asserting that it meets the contract expected by a consuming service.
  • End-to-end tests: verify that a system meets external requirements and achieves its goals, testing the entire system, from end to end.
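For the first category, a minimal JUnit sketch illustrates the idea; the PriceCalculator class and its discount rule are hypothetical:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Exercises the smallest testable unit in isolation, without any
    // collaborators (PriceCalculator is a hypothetical domain class).
    @Test
    public void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.discountedPrice(100.0), 0.001);
    }
}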

Conclusion: Martin Fowler's article provides possible test strategies that are certainly useful in the context of maintaining older applications, but also further ideas for an upcoming restructuring to improve testability.


A Look Back at the Camunda 7.4 Release Roadshow

On January 22, the Camunda BPM 7.4 Release Roadshow took place in Düsseldorf, with four exciting talks around Camunda BPM and DMN. Our partner Camunda presented the new version of its BPM platform, which has been available for download since the beginning of November. We at OPITZ CONSULTING contributed a talk on practical experience.

Jakob Freund presented the new Camunda Modeler in a live demo. The tool enables business users and developers to model BPMN processes and DMN decision models and make them executable.
Michael Ferber then introduced the audience to DMN, a relatively young OMG standard for modeling and executing business rules. He also demonstrated the modeling of business rules with the new Modeler.
After that, Bernd Rücker showed in a live hacking demo how the previously created models can be deployed to the BPM platform and executed.
The series of talks was concluded by a project report from Halil Hancioglu and Christoph Ortmann of OPITZ CONSULTING. In it, we showed how case management and business rules can support the knowledge workers of an insurance company in claims processing.

All in all, it was a successful morning that gave a good insight into the capabilities of the new version of Camunda BPM. The support for DMN is an important step towards improving the collaboration between business and development in implementing business rules. The new Modeler makes a good impression and already supports the most important features for working with BPMN processes and DMN decision models.


Contribution to an Exclusive Blog Series: New Flexibility with Forms 12c

The new Forms 12c has been out since October 2015 and brings a lot of new features with it. Unfortunately, these innovations are still not properly documented and not accessible in any presentation. So it is time to dig in and discover them! A few features have been presented by the Oracle Forms Product Manager in webinars and presentations, others can be found in the online help, and the rest still lies hidden.

Since I personally love this tool and enjoy working with it, I had already followed the release history beforehand in quite a few webinars with the Oracle Forms Product Manager, together with Mia Urman of Auraplayer. When the news about the new version 12 came out in October, I got started right away, installing a VM and experimenting a lot.

I regularly report on my experience and the latest news from Oracle in my personal blog, and I also follow various other sources covering the topic. Recently, Auraplayer called for guest contributions to a whitepaper of the Forms community, which are to be published online one by one – a collection of tips and tricks from users for the worldwide Forms community, so to speak.

So I felt called upon and submitted a contribution, which has just been published as the first of the series under the following link. Further entries will probably follow soon. My post deals with the new ways an application can run under Forms 12c. The background is the announcements by Microsoft and Google that new browser versions will no longer support the Java applet, which means Forms applications will no longer run in the browser. Oracle had to react to this to stay competitive with the new version.

Finally, I would like to mention that the new Forms version seems to have triggered a small hype, be it towards modernizing applications or regarding the perception of Forms in the outside world. The topic is currently a fixture at various DOAG events and other Oracle conferences, and will soon also be a main focus of the upcoming DOAG DevCamp in Bonn, where OC will be strongly represented.


Setting up your own CA for the enterprise

Inside an enterprise there are a lot of machines communicating with each other. It is necessary to keep these communications secure and private. This can be achieved through encryption.

In an enterprise SOA the most important protocol is HTTP. Its encrypted variant, HTTPS, requires at least one certificate: the host certificate of the server, which must be trusted on the client side. For details have a look at this post.

The needed certificate can be bought for a yearly fee of a few euros from an official certificate authority. Another possibility is to just use the certificates generated by the systems themselves ('snakeoil' certificates).

But if the enterprise needs many certificates, a better solution is to set up its own certificate authority.

Don't underestimate the effort of running your own CA. The effort lies not primarily in setting up the CA or generating the certificates. Much more time is needed for educating the employees handling the CA, for the organizational processes, and for the documentation of issued certificates, keys and their lifetimes. Official certificate authorities have hundreds of folders describing how employees have to act in different situations, who has permission to do what, who can stand in for key persons and so on. Even if you don't need such complex processes, there should be some definitions concerning the confidentiality of the CA and the generated certificates, and stand-ins for the employees handling the CA.

I will describe how to perform the individual actions with openssl on a Linux machine. Of course, a current version of openssl should be used.

openssl.cnf

The first step in setting up the CA is to create or modify the file openssl.cnf. When I started to set up the CA, I was surprised how difficult to understand and how poorly documented this file is. In fact, I was missing an example of the file with current settings and without everything that is not really needed. The section-reference system of the openssl.cnf file makes it quite difficult to understand. If any reader has suggestions for improving it, I would very much appreciate them.

#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#

# This definition stops the following lines choking if HOME isn't
# defined.
HOME            = .
RANDFILE        = $ENV::HOME/.rnd

####################################################################
[ ca ]
default_ca      = CA_default            # The default ca section

####################################################################
[ CA_default ]

dir             = ./CA                  # Where everything is kept
certs           = $dir/certs            # Where the issued certs are kept
crl_dir         = $dir/crl              # Where the issued crl are kept
database        = $dir/index.txt        # database index file.
#unique_subject = no                    # Set to 'no' to allow creation of
                                        # several certificates with same subject.
new_certs_dir   = $dir/newcerts         # default place for new certs.

certificate     = $dir/cacert.pem       # The CA certificate
serial          = $dir/serial           # The current serial number
crlnumber       = $dir/crlnumber        # the current crl number
                                        # must be commented out to leave a V1 CRL
crl             = $dir/crl.pem          # The current CRL
private_key     = $dir/private/cakey.pem # The private key
RANDFILE        = $dir/private/.rand    # private random number file

default_days    = 365                   # how long to certify for
default_crl_days = 30                   # how long before next CRL
default_md      = default               # use public key default MD
preserve        = no                    # keep passed DN ordering

# A few different ways of specifying how similar the request should look.
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy          = policy_match

# For the CA policy
[ policy_match ]
countryName             = match
stateOrProvinceName     = optional
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

####################################################################
[ req ]
default_bits        = 2048
default_md          = sha256
default_keyfile     = privkey.pem
distinguished_name  = req_distinguished_name
attributes          = req_attributes

# This sets a mask for permitted string types. There are several options.
# default : PrintableString, T61String, BMPString.
# pkix    : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = pkix

[ req_distinguished_name ]
countryName                 = Country Name (2 letter code)
countryName_default         = DE
countryName_min             = 2
countryName_max             = 2

stateOrProvinceName         = State or Province Name (full name)

localityName                = Locality Name (eg, city)
localityName_default        = Muenchen

0.organizationName          = Organization Name (eg, company)
0.organizationName_default  = opitz-consulting

organizationalUnitName          = Organizational Unit Name (eg, section)
organizationalUnitName_default  = IT Department

commonName                  = Common Name (eg, your name or your server's hostname)
commonName_max              = 64

emailAddress                = Email Address
emailAddress_max            = 64

[ req_attributes ]
challengePassword       = A challenge password
challengePassword_min   = 4
challengePassword_max   = 20

unstructuredName        = An optional company name

####################################################################
[ root_ca_extensions ]
# basicConstraints = critical,CA:true is what PKIX recommends, but some
# broken software chokes on critical extensions. So we do this instead:
basicConstraints = CA:true

# Key usage: this is typical for a CA certificate. However, since it will
# prevent it being used as a test self-signed certificate, it is best
# left out by default.
keyUsage = keyCertSign, cRLSign

# PKIX recommendations, harmless if included in all certificates.
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer

####################################################################
[ client_ca_extensions ]
# This goes against PKIX guidelines, but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints = CA:false

# This is a typical keyUsage for a client certificate.
keyUsage = keyEncipherment,nonRepudiation,digitalSignature,keyAgreement

# serverAuth and clientAuth.
# (timeStamping would be required for TSA certificates:
# extendedKeyUsage = critical,timeStamping)
extendedKeyUsage = 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2

# PKIX recommendations, harmless if included in all certificates.
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer

####################################################################
[ server_ca_extensions ]
# This goes against PKIX guidelines, but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.
basicConstraints = CA:false

# This is a typical keyUsage for a host certificate.
keyUsage = keyEncipherment,nonRepudiation,digitalSignature,keyAgreement

# serverAuth and clientAuth.
# (timeStamping would be required for TSA certificates:
# extendedKeyUsage = critical,timeStamping)
extendedKeyUsage = 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2

# PKIX recommendations, harmless if included in all certificates.
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer

The line 'default_ca = CA_default' in the [ca] section selects CA_default as the default CA section. In the CA_default section, the locations of the different files are defined.

If 'unique_subject = no' is commented out, every old certificate must be revoked before a new one with the same subject can be generated.

The setting default_days = 365 issues certificates with a validity of one year by default.

The section policy_match defines what certificate requests to be signed by this CA must look like. It is activated by the line 'policy = policy_match'.

Next is the req section. It is used when creating a certificate signing request. We set the default RSA key length to 2048 bits and the default digest algorithm to SHA-256.

With req_distinguished_name we set up the defaults for the CN and the other attributes of the certificate.

After that, I decided to create three different extension sections for the three different use cases: root_ca_extensions, client_ca_extensions and server_ca_extensions. The section to use is selected during the signing of the CSR.

In my first tries, client_ca_extensions and server_ca_extensions had different extendedKeyUsage settings: 1.3.6.1.5.5.7.3.2 (clientAuth) for the client and 1.3.6.1.5.5.7.3.1 (serverAuth) for the server. In my opinion this should be enough, but I later saw some unexpected behavior on one system, perhaps triggered by this; so, without further research, I added both key usages to both certificate types.

For the keyUsage field I found a good explanatory comment on Stack Exchange. To sum up: the required usages depend on the cipher suite, which is why it is recommended to add all four.

Creating the CA key and self-signing the CA certificate

First we generate the private key of the root CA and store it encrypted in a file:

openssl genrsa -des3 -out ${ROOT_CA_KEY} 4096

ROOT_CA_KEY is the encrypted private key file of the root certificate of the CA.

The second step is to create the certificate of the root CA:

openssl req -x509 -new -sha256 -days 3650 -config /etc/pki/tls/openssl.cnf -extensions root_ca_extensions -key ${ROOT_CA_KEY} -out ${ROOT_CA_CRT} -passin pass:${ROOT_CA_PWD}

ROOT_CA_PWD is the password for the private key of the root certificate.

ROOT_CA_CRT is the root certificate of the CA.

Now our CA is ready to create certificates with associated private keys and to sign certificate signing requests.

Creating a client certificate with the associated private key

I implemented the same two steps for all three use cases, even though they could be combined or are not necessary in some cases:

  • Create a certificate signing request and a private key (file extensions csr and key)
  • Sign the certificate signing request and generate the certificate (file extension crt)

In many examples the private key file and the certificate file use the extension pem. I prefer the extensions key and crt to make it clearer what is inside each file.

In this use case we create the csr and the key with the command:

openssl req -sha256 -days 365 -newkey rsa:2048 -nodes -keyout ${KEY} -out ${CSR} -subj "/C=DE/L=Muenchen/OU=IT Department/O=opitz-consulting/CN=${CERT_CN}/emailAddress=${CERT_EMAIL}"

KEY is the filename with path of the generated private key file

CSR is the filename with path of the generated certificate signing request file

CERT_CN is the identifier of the user/system using this client certificate. This could be e.g. Mister Someone or CRM.

CERT_EMAIL is the mail address of the user or using system.

The second step is to sign the csr and create the certificate:

openssl ca -keyfile ${ROOT_CA_KEY} -cert ${ROOT_CA_CRT} -config /etc/pki/tls/openssl.cnf -extensions client_ca_extensions -notext -batch -days 365 -md sha256 -in ${CSR} -out ${CRT} -passin pass:${ROOT_CA_PWD}

CRT is the filename with path of the generated certificate file

We use the client_ca_extensions for creating this certificate.

It is highly recommended not to just execute the two commands on the shell. Instead, the commands should be wrapped into shell scripts taking only one or two parameters and defining the folders, file names, naming patterns and so on. Otherwise, especially if different employees are issuing certificates, the overview of the issued certificates will be lost very soon.

Creating a host certificate with the associated private key

In this use case we create the csr and the key with the command:

openssl req -sha256 -days 365 -newkey rsa:2048 -nodes -keyout ${KEY} -out ${CSR} -subj "/C=DE/L=Muenchen/OU=IT Department/O=opitz-consulting/CN=${CERT_CN}"

KEY is the filename with path of the generated private key file

CSR is the filename with path of the generated certificate signing request file

CERT_CN is the fully qualified hostname of the machine.

The second step is to sign the csr and create the certificate:

openssl ca -keyfile ${ROOT_CA_KEY} -cert ${ROOT_CA_CRT} -config /etc/pki/tls/openssl.cnf -extensions server_ca_extensions -notext -batch -days 365 -md sha256 -in ${CSR} -out ${CRT} -passin pass:${ROOT_CA_PWD}

CRT is the filename with path of the generated certificate file

We use the server_ca_extensions for creating this certificate.

Signing a host certificate request

In this use case, the CSR is created on another system and provided for signing. The private key associated with the certificate signing request stays on that system and is not needed for the signing process. Only the second step, signing the CSR and creating the certificate, is executed:

openssl ca -keyfile ${ROOT_CA_KEY} -cert ${ROOT_CA_CRT} -config /etc/pki/tls/openssl.cnf -extensions server_ca_extensions -notext -batch -days 365 -md sha256 -in ${CSR} -out ${CRT} -passin pass:${ROOT_CA_PWD}

CSR is the filename with path of the certificate signing request file provided

CRT is the filename with path of the generated certificate file

We use the server_ca_extensions for creating this certificate.

Bernhard Mähr @ OPITZ-CONSULTING published at http://thecattlecrew.wordpress.com/


Close the containing parent popup component without binding, using an ActionEvent

When using popups in ADF, we faced the problem of how to close the popup after finishing an activity.
So this is the requirement: We have a table providing data, with an info facet holding buttons for adding, editing and deleting a selected row. While the add and edit buttons open a popup with a form, the delete button opens a dialog to confirm the delete action. This is realized with a showPopupBehavior. Of course, there is a cancel button in all of these popups, closing it immediately and rolling back.

This is an example of what such a popup might look like.

Just as the cancel button immediately rolls back, the save button immediately commits the changes that were made.

This is the delete popup.

The cancel button here works out of the box: it closes the dialog without rolling back, because there is nothing to roll back. The delete button, however, executes the action, commits and closes the dialog.

How is this implemented?

Every button has an actionListener property, which we will use. After completing our task in the preceding code, we call a closePopup method.

// Required imports: javax.faces.component.UIComponent, javax.faces.event.ActionEvent,
// oracle.adf.view.rich.component.rich.RichPopup

public void closePopup(ActionEvent actionEvent) {
    // Start at the parent of the component that triggered the event ...
    UIComponent tmpComponent = actionEvent.getComponent().getParent();

    // ... and walk up the component tree until we reach the surrounding popup.
    // The null check guards against the case that no popup is found at all.
    while (tmpComponent != null && !(tmpComponent instanceof RichPopup)) {
        tmpComponent = tmpComponent.getParent();
    }

    if (tmpComponent != null) {
        RichPopup popup = (RichPopup) tmpComponent;
        popup.hide();
    }
}

So we walk up the parents of the calling component until we hit a UIComponent that is an instance of RichPopup, which we can then close.
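For example, the delete button's listener can execute the operation, commit and then reuse closePopup. A minimal sketch (the operation binding names "Delete" and "Commit" are hypothetical and depend on your page definition; it uses oracle.adf.model.BindingContext and oracle.binding.BindingContainer):

public void deleteAndClose(ActionEvent actionEvent) {
    // Look up the operation bindings of the current page definition.
    BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
    bindings.getOperationBinding("Delete").execute();
    bindings.getOperationBinding("Commit").execute();
    // Finally close the surrounding confirmation popup.
    closePopup(actionEvent);
}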

What’s the advantage of this?

First of all, this code is reusable in every situation where we want to close the parent popup.

Second, it works regardless of whether the component is bound to a managed bean.

Even though we have not experienced any problems closing popups with this technique, there may be issues we are not aware of. If you know of any that may appear, please let us know in the comments!


Testing-Concepts and Considerations in a SOA landscape that uses the Oracle Service Bus Framework

Introduction

Due to the ever-growing complexity and the multiple dependencies of software components in an enterprise landscape, automated testing is no longer an optional feature but a crucial ingredient of the development process, and it plays a significant role in every successfully launched software project.

Software development in a SOA environment has its own challenges that differ from, let's say, purely Java components. One reason is that software testing needs invariant laboratory conditions, but exactly this is hard to achieve in a complex service landscape. So when designing tests, it is very important to establish isolation in order to meet this requirement. Another point worth keeping in mind is that frameworks like Oracle Fusion Middleware follow a declarative development approach; this means that most testing is possible only after deployment and never before (because the services are de facto built up during deployment). Some artifacts can be tested before deployment, but we will get to that later.

In this article, focusing on the Oracle Service Bus, we outline a feasible testing strategy that allows us to implement automated tests of arbitrary complexity, from simple service tests up to end-to-end tests that involve a whole service chain spread over one or more domains.

Test Categories

We assume that the reader is familiar with service classification concepts like "elementary services" or "composite services". We also assume that basic OSB concepts like XQueries, proxy services and business services are well known.

Let us focus for the moment on transformation logic. The tools offered by the OSB are XQueries and XSLT transformations. These files describe, roughly speaking, how a certain XML structure should be transformed into another one, or how a piece of information can be extracted from it. Hence they can be regarded as functions that receive an XML structure as input and provide another XML structure or simple data as output. These artifacts can be tested with plain JUnit before deployment. One might consider this an unnecessary testing step, but it is enormously important, because it guarantees that no side effects remain undetected when, e.g., a namespace or a structural modification of an XML schema has to be carried out.

So keeping these points in mind, one can make the following distinction:

Pre-deployment Tests

These are tests mainly using JUnit techniques and may be considered "low-level" or basic tests. They should guarantee that the basic transformation logic of the data structures meets the requirements. Testing the XQuery components also helps us ensure the correctness of the XQuery code and avoid namespace inconsistencies and confusion.
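As a minimal sketch, such a pre-deployment test could look as follows. It uses the Saxon s9api as XQuery processor (an assumption; any XQuery runtime works), and the query file, variable name and expected result are hypothetical:

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.StringReader;
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.QName;
import net.sf.saxon.s9api.XQueryEvaluator;
import net.sf.saxon.s9api.XdmNode;
import org.junit.Test;

public class CustomerTransformationTest {

    @Test
    public void mapsSourceCustomerToTargetStructure() throws Exception {
        Processor processor = new Processor(false);

        // Compile exactly the XQuery file that will later be deployed
        // to the OSB (hypothetical path and file name).
        XQueryEvaluator evaluator = processor.newXQueryCompiler()
                .compile(new File("src/osb/customerToCrm.xq")).load();

        // OSB XQueries usually declare their input as an external variable,
        // e.g.: declare variable $customer as element() external;
        XdmNode input = processor.newDocumentBuilder().build(new StreamSource(
                new StringReader("<customer><name>Someone</name></customer>")));
        evaluator.setExternalVariable(new QName("customer"), input);

        // Serialize the result and compare it with the expected structure.
        XdmNode result = (XdmNode) evaluator.evaluate().itemAt(0);
        assertEquals("<crmCustomer><fullName>Someone</fullName></crmCustomer>",
                result.toString().trim());
    }
}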

Post-deployment Tests

These tests can be performed after the deployment of the OSB artifacts has been completed, and they fall into two main categories.

  1. Tests that affect a single service. They usually consist of a single call to a service provided by the OSB server and an analysis of the response.
  2. End-to-end tests; these are much more complex and require mock services in order to simulate a whole service landscape and establish laboratory conditions.

Figure 1: Testing Pyramid

Let us now focus on the testing processes themselves.

Simple Service Tests

These tests guarantee that, given a certain input, the service responds in the expected way.

Figure 2: Simple Service Tests

Complex End-To-End Tests

These tests generally define a dedicated entry point and a dedicated exit point where the assertions have to be performed. Everything in between can be considered a "black box".

Furthermore, in order to guarantee the so-called laboratory conditions, i.e. to avoid side effects, parts of the chain, such as one or more services in the middle, have to be replaced by mock services.

Figure 3: Complex End-To-End Tests

Implementation schema

In the following we briefly outline the main implementation schema.

The Java test process can be separated into the following steps:

  1. Via JMX, the Java test application connects to the OSB server and manipulates the endpoint of a certain business service.
  2. The client side establishes a Java mock service having all the required characteristics of the OSB service that has to be mocked (a sketch of these first two steps follows below).
  3. The test is initiated, i.e. the test request is sent.
  4. The mock service established in step 2 waits for the request from the OSB service whose endpoint was redirected in step 1.
  5. All the designed assertions are made.
  6. All changes made to the OSB artifacts are rolled back.

Figure 4: End-To-End Tests
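The following sketch illustrates steps 1 and 2. The JMX connection uses the standard WebLogic t3 pattern (which requires a WebLogic client library on the classpath); the actual endpoint manipulation via the OSB configuration MBeans is only hinted at in a comment, and the mock endpoint uses the JDK's built-in HttpServer. Host names, ports and credentials are placeholders:

import java.net.InetSocketAddress;
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;
import com.sun.net.httpserver.HttpServer;

public class EndToEndTestSetup {

    // Step 1: connect to the domain runtime MBean server of the OSB domain.
    // The endpoint of the business service is then changed through the OSB
    // configuration MBeans (e.g. ALSBConfigurationMBean within a session).
    static MBeanServerConnection connect(String host, int port,
                                         String user, String password) throws Exception {
        JMXServiceURL url = new JMXServiceURL("t3", host, port,
                "/jndi/weblogic.management.mbeanservers.domainruntime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, user);
        env.put(Context.SECURITY_CREDENTIALS, password);
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        return connector.getMBeanServerConnection();
    }

    // Step 2: a mock service on the test client; the business service
    // endpoint manipulated in step 1 now points here.
    static HttpServer startMock(int port, String cannedResponse) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/mock", exchange -> {
            byte[] body = cannedResponse.getBytes("UTF-8");
            exchange.getResponseHeaders().add("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }
}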

The whole testing process can be automated using a deployment server, e.g. Jenkins. In order to enable the system to perform the tests, both the deployment server and the WebLogic managed servers (OSB server or cluster) have to be accessible to each other.


Conclusion

The testing framework might need quite a lot of implementation effort in order to cover all the requirements that arise from a complex SOA environment. However, we emphasize that all this effort will be well rewarded in terms of quality and stability.


Using certificates for authentication in M2M communication

Most bigger companies are building up an enterprise SOA today. One of the key characteristics of an enterprise SOA is machine-to-machine (M2M) communication.

The communication between the machines has to be secured. It is important

1.) to keep the content of the sent messages private, and

2.) to ensure that only authorized machines can invoke operations.

The first point can be achieved by using HTTPS for the web service calls.

Username and Password over HTTPS

The second point is often realized in user-to-machine communication with a username/password combination; this approach can also be used for machine-to-machine communication.

[Figure: BLOG_Certificates_00]

On the client side (host1), the certificate of the certificate authority is needed to establish trust in the CA.

On the host side, there is the host certificate of machine host2 with the associated private key. This certificate is used to make sure machine host1 is really talking to machine host2. The CN attribute inside the certificate must match the fully qualified hostname of machine host2.

The username and password are transmitted over the encrypted HTTPS connection. The host verifies user and password and maps them to the account used for the operation and the rights the caller has.

Most IT professionals know this type of communication and authentication. But there is another well-supported option: client certificates.

Client certificates

In this blog post I will describe the basics of how M2M communication with client certificates works.

The following picture describes the call of a web service using HTTPS and a client certificate:

[Figure: BLOG_Certificates_01]

On the client side (host1), the client certificate of the system, e.g. CRM, with the associated private key is needed. This certificate is used to identify the (logical) system to machine host2. If the system is a cluster, the same certificate is used on all of its machines.

On the host side, there is the host certificate of machine host2 with the associated private key. This certificate is used to make sure machine host1 is really talking to machine host2. The CN attribute inside the certificate must match the fully qualified hostname of machine host2.

Additionally, we need the client certificate of the system (e.g. CRM) without the private key on machine host2. Inside machine host2 there is an association from the client certificate of the system CRM to the account used for the operation and the rights the caller has. How this association is created depends on the application software on machine host2.

On both sides we need the certificate of our CA. This certificate is needed to verify that all other certificates were issued by the trusted CA and not by a third party. Inside the enterprise it is a best practice not to rely on the list of trusted CAs automatically provided by Java, SAP or another manufacturer. Instead, a custom list containing only the company's own trusted CA should be used.
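A minimal Java sketch of this setup (keystore file names, passwords and the use of JKS keystores are assumptions): the client trusts only the company CA and presents its client certificate.

import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class MutualTlsClient {

    public static void main(String[] args) throws Exception {
        // Truststore containing ONLY the company CA certificate,
        // replacing the default CA list shipped with the JDK.
        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(new FileInputStream("company-ca-truststore.jks"),
                "truststore-password".toCharArray());
        TrustManagerFactory tmf = TrustManagerFactory
                .getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        // Keystore holding the client certificate of the system (e.g. CRM)
        // together with its private key.
        KeyStore keyStore = KeyStore.getInstance("JKS");
        keyStore.load(new FileInputStream("crm-client.jks"),
                "keystore-password".toCharArray());
        KeyManagerFactory kmf = KeyManagerFactory
                .getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, "keystore-password".toCharArray());

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        // The server certificate presented by host2 must be issued by the
        // company CA, and its CN must match the fully qualified hostname.
        HttpsURLConnection connection = (HttpsURLConnection)
                new URL("https://host2.example.com/service").openConnection();
        connection.setSSLSocketFactory(sslContext.getSocketFactory());
        System.out.println("HTTP status: " + connection.getResponseCode());
    }
}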

Bidirectional communication with client certificates

If the communication can be initiated from both machines, or if we are using asynchronous web services, we need to double the number of certificates:

[Figure: BLOG_Certificates_02]

On both machines we have a (different) client certificate with the associated private key, identifying the system and proving its identity.

On both machines we have a (different) host certificate with the associated private key, matching the fully qualified hostname of the machine. Additionally, the client certificate of the other system is needed for the association with the account used for processing the requests.

Security considerations

– Keep the private key of the client certificate secure. Anyone who obtains this private key can easily do everything the machine is able to do.

– Don't use wildcard certificates as host certificates, because a wildcard certificate can act as any machine of the enterprise and could therefore, e.g. combined with DNS spoofing, listen in on any communication.

Bernhard Mähr @ OPITZ-CONSULTING published at http://thecattlecrew.wordpress.com/
