First look at JRules 7

This entry is part 1 of 5 in the series First Looks at JRules 7

Well, it’s been a while since I posted, but I can blame being busy on the non-blogging side of life, time off, and preparing for the conferences in the fall. I’m not done with that last point, but I think it was time for me to have a look at JRules 7, which came out a couple of weeks ago.

This will be the first post in a series that covers my experience trying out JRules 7. The topics will include simple differences, the migration process, Decision Validation Service, and Rule Solutions for Office.

My first impressions: I like it.

The transition to the IBM world is almost seamless. Some of the new components have interesting capabilities; overall, it is a good upgrade.

Differences compared to the previous version

The different components are now installed separately, as opposed to the single install of previous versions:

  • Rule Studio and Execution Server
  • Decision Validation Service (DVS) (replaces Rule Scenario Manager, or RSM)
  • Rule Team Server (RTS)
  • Rule Solutions for Office

The default web server that comes with the install is now Apache Tomcat (so long, JBoss!). I had my reservations initially, but everything is working just fine so far. Actually, I don’t know exactly what IBM/ILOG changed, but the startup time of the web server is a LOT faster in Tomcat than it was in JBoss. In my case, the load time of the web server went from 1m48s to 21s, which is roughly five times faster.

The JVM that comes with the install is probably the “IBM Development Package for Eclipse” (J2RE 1.6.0). I say probably because the IBM J2SE environment is not available for Windows. So it looks like they did away with the usual Sun JVM, which I suppose is normal now that it is an IBM product.

Rule Studio leverages Eclipse 3.3.1 and now comes with an XML editor that makes it easy to edit XML and XSD files. In previous versions, you had to rely on a third-party Eclipse plugin.

Installation

The install process is pretty straightforward. I personally had some issues when launching the ruleflow editor, but it turned out that the cause wasn’t Rule Studio at all: I needed to upgrade my graphics card drivers. No, it wasn’t straightforward to figure that one out. Yes, I am glad it is over… 🙂

So far, so good. In my next post, I will discuss my experience migrating a project from JRules 6.7.2 to JRules 7.0.

First look at JRules 7.0 – Migration of a project

This entry is part 2 of 5 in the series First Looks at JRules 7

I tried a couple of project migrations: one smaller and simpler, and one that was much larger and had already gone through two prior upgrades of JRules.

To start off, the documentation that prepares and guides users through a migration is decent. IBM/ILOG provides different typical migration scenarios, which depend on your use of previous versions of JRules and the tools they provided. For the migrations I tried, I was in the simplest situation: users who use Rule Studio and source code control to manage the code and rules, and who possibly publish to a Rule Execution Server.

The other scenarios, which require updating Rule Team Server and Rule Scenario Manager, are obviously more complicated, and people having to go through those migrations may have more work to do than I did.

The migration process itself is silent: there are no messages that come up to tell you the progress, or anything to that effect. There is one manual process they encourage you to go through to make sure the migration is complete, and that is to open the decision tables one by one, introduce a fake change, and then save so that they are converted to the new format. In previous versions of JRules it was possible to search all decision tables using a wildcard, but not in this version, for some reason. Luckily, you can work around that by doing a file search for all files ending in “DTA”, the extension used for decision tables. That way you can double-click the results, make the “fake change”, and save the file to transform it to the new format.

Migration of the simple project

A simple XOM, a simple BOM, and simple rules result in a simple migration.

After importing each project into Rule Studio, I was left with only a single warning: there was an empty rule task in a ruleflow. It turns out that although you could use empty rule tasks or empty function tasks to rejoin some of the flows in your ruleflow, using empty rule tasks now gives a warning. That is an easy fix: simply replace the empty rule task with an empty function task.

The other piece of the “migration” I had to go through was to re-generate the “Client Project for RuleApp” Web Service and to re-apply my modifications based on the 6.7.2 code. I chose to re-generate and re-modify the code for a couple of reasons:

  • I wanted the latest and greatest code generation (it turns out not much is really different)
  • I wanted to get the build.xml files for deployment onto the Tomcat platform as opposed to the JBoss platform

Some of the code I was using in the ExecutionHook.java class had to be rewritten because of the API refactoring performed in JRules 7.0. Nothing major, but it needed rewriting.

That was it! Well not quite. Now on to the complex project migration.

Migration of the complex project

For the complex project, the migration started out fairly straightforward. I encountered some warnings in the rule project after the import completed.

It turns out that the ruleflow editor is stricter than previous ruleflow editors and therefore gives warnings that were not present in the 6.7 version of the editor. In my case it had to do with some empty rule tasks. I mentioned this issue above, but this was different.

The empty rule tasks were actually not visible in the editor itself. It seems that over years of editing the ruleflows, copy-and-pastes, and migrations, some “ghost” rule tasks were left around in the ruleflow but were not visible in the editor. The only way to remove them was to go into the XML and remove the XML for each of those tasks. As a note, I don’t think this situation would be encountered normally, but the history of the project I experimented with led to some of these behaviours.
Note: Before doing that, always make sure you have backed up that file, because if you botch the edit you may lose your ruleflow altogether.

Besides these “ghost” tasks that you need to manually edit out of the ruleflow XML, the migration process is pretty easy and straightforward.

Side notes: The ruleflow editor

I find that the ruleflow editor interface has been improved quite a bit. It is nicer looking than it used to be and the editors for initial action, final action and properties on rule tasks are much better to work with since they are not as limited in size as they were before. The ruleflow itself loses the initial action and final action, but these are moved to the more logical start task and end task properties respectively. Overall a much better interface than in previous versions.

In my next post I will discuss the Decision Validation Service.

First look at JRules 7.0 – Decision Validation Service

This entry is part 3 of 5 in the series First Looks at JRules 7

JRules 7 introduces the Decision Validation Service (DVS), which replaces the Rule Scenario Manager (RSM) of previous versions.

The main purpose of DVS is to provide a way to test the rules and validate results against expected results. It also allows for simulations, which compare the results of executing two rulesets.

As a note for RSM users, there is a way to migrate an RSM archive to a scenario file that is compatible with DVS. I haven’t tried this process, but it is available.

Testing

DVS uses Excel files to feed input data and expected results through the rules. A user can generate an Excel template to fill in with input data and expected results, then run the tests and get a report on tests passing or failing. This approach can work well in some environments, as long as your business users don’t mind entering the test data in the Excel template.

One thing that is unclear to me is how this Excel file evolves when your BOM evolves. For example, if you add a new field, is it possible to add this new field to the Excel file without regenerating it and without breaking things? In any case, I’m pretty sure that with some Excel gymnastics it would be possible to copy and paste contents so that re-entering everything is not required, but the documentation is not clear on what steps would be required if you already have a filled-in Excel template.

Should you wish to go that route, there is also an API that can be used to populate the Excel template, and there is a sample tool that is supposed to help auto-populate the Excel file with data. The sample code would have to be customized to fit the needs of other projects.
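I have not explored that API, but to make the idea concrete, here is a minimal sketch of the kind of code involved, written against plain Apache POI rather than the DVS API itself. The file name, sheet name, and column layout are assumptions for illustration; in practice they depend entirely on the template the wizard generates for your BOM.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ScenarioTemplateFiller {
    public static void main(String[] args) throws Exception {
        // Open the Excel template generated by the DVS wizard (file name is hypothetical).
        FileInputStream in = new FileInputStream("loan-scenarios-template.xls");
        Workbook workbook = WorkbookFactory.create(in);

        // The sheet name and column order are assumptions; check your generated template.
        Sheet scenarios = workbook.getSheet("Scenarios");

        // Append one scenario row below the existing header rows.
        Row row = scenarios.createRow(scenarios.getLastRowNum() + 1);
        row.createCell(0).setCellValue("Scenario 1");  // scenario name
        row.createCell(1).setCellValue(42);            // e.g. applicant age
        row.createCell(2).setCellValue(150000);        // e.g. requested loan amount

        // Save a populated copy that can be fed to a DVS test run.
        FileOutputStream out = new FileOutputStream("loan-scenarios.xls");
        workbook.write(out);
        out.close();
        in.close();
    }
}
```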

I tried the DVS for testing on the samples that ILOG provides and things obviously work relatively well. Generate a template based on a BOM, fill in the template with input data and expected results, run the test and voilà, a nice report indicating which tests passed or failed.

The main problem I have with using DVS is the limitations it has on generating the expected results template structure. If your output is not a “flat” structure, you may be in for a lot of fun. If your main output entity contains other entities, the scenario might not get generated properly when the number of child entities is not bounded. In the real-life projects I played with, the output has multiple levels and some unbounded elements, and the wizard can’t generate the template properly because it attempts to “flatten” it. There might be a workaround using virtual attributes (which basically means that you end up flattening the structure manually), but I was not going to start doing that for 180 attributes.
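To picture the flattening workaround, the idea is to expose pieces of the nested, unbounded structure as simple, flat attributes that the wizard can map to expected-results columns. Here is a hypothetical sketch in plain Java; the Decision and Adjustment classes and their getters are made up, and in JRules you would typically declare the flattened members as virtual attributes in the BOM rather than changing the XOM:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical child entity: an unbounded collection of these cannot be
// mapped directly to Expected Results columns by the DVS wizard.
class Adjustment {
    final String code;
    final double amount;
    Adjustment(String code, double amount) { this.code = code; this.amount = amount; }
}

// Hypothetical output entity exposing "flattened" attributes of simple type.
public class Decision {
    private final List<Adjustment> adjustments = new ArrayList<Adjustment>();

    public void add(Adjustment a) { adjustments.add(a); }

    // Each flattened attribute can be verbalized and used as an expected-results column.
    public int getAdjustmentCount() { return adjustments.size(); }

    public double getTotalAdjustmentAmount() {
        double total = 0;
        for (Adjustment a : adjustments) {
            total += a.amount;
        }
        return total;
    }
}
```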

For those interested, I am reproducing the limitation text here:

From the main readme.html located in the install directory:

Limitation: Cannot test the content in collections of complex type in Excel scenario file templates.

Comments and workarounds: The column headings of the Excel scenario file template are defined by the attributes of the BOM class. Each column corresponds to an attribute. Each attribute for which a test is defined in the Expected Results sheet:

  • must be verbalized, since the name used in the column heading is the verbalization of the attribute
  • must not be a collection of complex type

If the attribute is a collection of simple type (int, String, and so on), the domain must be defined.

And from the documentation:

WebSphere ILOG JRules V7.0 > Rule Studio online help > Testing, validating, and auditing rules > Tasks > Adding or removing columns from the Excel file

Note

Attributes of complex type, such as collections of complex objects, or maps are not available for selection in the Expected Results page of the Generate Excel Scenario File Template wizard. If you want to use these attributes as expected results, you must create virtual attributes of simple type.

Besides this limitation, DVS is very promising.

Simulations

Simulations allow you to leverage DVS to perform “what if” analysis of the rules; in other words, to analyze how a new ruleset will behave, usually based on some historical data. You identify and create KPIs that perform the measurements you are looking for, and the results are made available to you after running the simulation.

I haven’t tried the simulations at this point; if I get a chance, I will in the future and report my findings here.

Decision Warehouse

The Decision Warehouse can be used to troubleshoot or to have a look at the execution trace of a ruleset. After enabling some properties on the ruleset and deploying it to the RES, you are able to have a look at the trace of the execution of the rules.

The execution details include the “Decision Trace”, which basically shows you which rules fired and in what order, as well as the input and output parameters, which let you see the details of what went in and out of the rule execution.

I am really interested in this tool because I think it may allow business users to see which rules were fired and some of the technical details without tracing the execution of the rules step by step from Rule Studio. It can help identify the source of the problem, although the real troubleshooting might still have to be done in Rule Studio.

Each decision is now tagged with an execution ID, or decision ID, which can be used to locate the decision in the Decision Warehouse. Once you are done troubleshooting, you can turn off the properties so that the execution stops gathering all of that information.

Enabling the trace on the execution obviously slows the performance, but it can facilitate some of the troubleshooting that sometimes needs to take place. A nice addition to the troubleshooting tools.
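For context on where these traces come from, here is a rough sketch of a Java client invoking a deployed ruleset through the RES stateless session API introduced in JRules 7. The class and method names are recalled from that API and may differ slightly in your version; the ruleset path and parameter names are made up. The Decision Warehouse capture itself is switched on through the ruleset properties mentioned above, not in this client code.

```java
import java.util.HashMap;
import java.util.Map;

import ilog.rules.res.model.IlrPath;
import ilog.rules.res.session.IlrJ2SESessionFactory;
import ilog.rules.res.session.IlrSessionFactory;
import ilog.rules.res.session.IlrSessionRequest;
import ilog.rules.res.session.IlrSessionResponse;
import ilog.rules.res.session.IlrStatelessSession;

public class RulesetClient {
    public static void main(String[] args) throws Exception {
        // J2SE factory for local execution; J2EE/EJB factories exist for server-side clients.
        IlrSessionFactory factory = new IlrJ2SESessionFactory();
        IlrStatelessSession session = factory.createStatelessSession();

        IlrSessionRequest request = factory.createRequest();
        // Hypothetical path: /ruleAppName/ruleAppVersion/rulesetName/rulesetVersion
        request.setRulesetPath(IlrPath.parsePath("/loanValidation/1.0/eligibility/1.0"));

        // Input parameter names must match the ruleset parameters (made up here).
        Map<String, Object> inputs = new HashMap<String, Object>();
        inputs.put("loanRequest", buildLoanRequest());
        request.setInputParameters(inputs);

        IlrSessionResponse response = session.execute(request);

        // The output parameters hold the decision; when tracing is enabled on the ruleset,
        // the same execution (with its decision ID) is recorded in the Decision Warehouse.
        Map<String, Object> outputs = response.getOutputParameters();
        System.out.println(outputs.get("loanResponse"));
    }

    private static Object buildLoanRequest() {
        // Placeholder: a real client would build the XOM object the ruleset expects.
        return new Object();
    }
}
```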

First look at JRules 7.0 – Rule Solutions for Office

This entry is part 4 of 5 in the series First Looks at JRules 7

Rule Solutions for Office is the newest addition to the JRules family. It allows business users to edit rules in Word (for Action Rules) or Excel (for Decision Tables). Rule Solutions for Office requires MS Office 2007, which is definitely not available in all business environments (companies are sometimes slow to upgrade these suites).

That said, I think the actual product is extremely useful and a lot of companies should be able to use it to make rule editing easier.

The premise here is that from a teamserver rules repository, it is possible to export the contents of the rules to a Word document for regular business rules and to an Excel spreadsheet for decision tables. These Word and Excel documents then allow a user to edit the rules “offline” and then, once they are done, upload the changes back to teamserver.

For exporting the rules from teamserver to the Office documents, it is possible to use queries and some other settings to filter some rules out or to break the documents into pieces by package. Before the actual export takes place, you are able to verify that the results will be as you expect.

This export does not seem to allow a user to save the files to his or her own drive; it exports to a location that the server has access to (possibly a shared drive). To update the changes in teamserver, the files in that specified location are the ones that need to be updated. When I think about it, this makes sense, because you probably want to control who downloads what, and especially who re-uploads what, and doing things that way gives that control to the Rules Administrator.

In both plug-ins it is possible to have a view of the vocabulary that is available for writing the rules.

Conflict resolution

In my tests, I experimented with making two different changes to the same rule: one through the Office plugins and one directly on the team server. When trying to upload the rules back into teamserver, the process actually detects the conflict and indicates that your teamserver rules might be overridden. It then lists the rules in conflict and allows you to choose whether or not to override each one. The default is not to override the teamserver rules. (Side note: Internet Explorer 8 does not behave very well as a browser with teamserver, so I switched to Firefox halfway through my tests.)

Excel Plug-in

The Excel plug-in allows editing of decision tables. I was happy to see that it includes some of the basic validations you would expect from a decision table validator, such as flagging overlaps between rows or checking for gaps. It makes editing the rules very easy. It is also possible to see the preconditions on the table, if any are present, as well as to view the rule statement that a row is equivalent to, just to make reviewing a little easier.

It is possible to add condition and action columns to existing decision tables, the same way you would in Rule Studio or teamserver. What impressed me as well is that you can also add completely new decision tables by adding a new worksheet to your workbook. Very cool. Deleting a decision table is also possible, simply by deleting its worksheet.

Word Plug-in

The Word plug-in also allows a user to edit rules, add new rules, or delete rules. The rule editor is pretty solid; it prompts the user with choices in a way that is very similar to the behavior on teamserver.

The final word

Rule Solutions for Office is a set of extremely powerful plug-ins that business users can easily leverage for writing rules offline. As long as the users have been exposed to rule editing before (in teamserver, for example), they should be able to easily figure out what to do within the Word and Excel environments. Training will be required if they have not edited rules before.

In short, very cool and functional tools for business users.

First look at JRules Scorecard Modeler

This entry is part 5 of 5 in the series First Looks at JRules 7

I have recently been able to use the new add-on to JRules called Scorecard Modeler.

Despite what the name may imply, Scorecard Modeler does not help you model scorecards at all. In other words, it is not a statistical analysis tool that would be used to create scorecard models, but simply an editor that allows easy creation of scorecards within the JRules environment.

What is a scorecard? I have recently written a post titled “A quick introduction to scorecards” that goes over some of the details of what a scorecard is. I suggest you start there before continuing with this post.
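For readers who just want the gist before clicking through: a scorecard assigns partial points to each input attribute depending on the bin its value falls into, and the points are summed into a total score that is usually compared to a cut-off. Here is a minimal, hypothetical illustration in plain Java, with made-up attributes, bins, points, and cut-off; it shows only the concept, not what the modeler generates.

```java
public class ScorecardSketch {
    // Made-up partial scores: each attribute contributes points based on its bin.
    static int agePoints(int age) {
        if (age < 25) return 10;
        if (age < 40) return 25;
        return 40;
    }

    static int incomePoints(double income) {
        if (income < 30000) return 5;
        if (income < 80000) return 20;
        return 35;
    }

    public static void main(String[] args) {
        int score = agePoints(32) + incomePoints(55000);  // 25 + 20 = 45
        boolean accepted = score >= 40;                   // made-up cut-off
        System.out.println("score=" + score + ", accepted=" + accepted);
    }
}
```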

Scorecard modeler is simply a new editor that is added to your list of choices for adding rules to a rule project, and in some ways it looks similar to the decision table editor.

It is a Rule Studio-only editor, so as of this writing there is no RTS editor available. How does it work? It simply generates the usual IRL rules (like all the other editors) for deployment. I suspect that IBM will be working on an RTS editor for this, but that is only speculation at this time.

The Review

Installation

The installation of the scorecard modeler requires that you already have JRules installed (version 6.7.x, 7.0, or 7.1). The installer will ask you to select the directory of your existing JRules install, but otherwise it is pretty straightforward.

The Editor

Once you have installed the modeler and relaunched Rule Studio, the new editor is available for creating scorecards in JRules. Some of the options change the number of columns that need to be filled in for the scorecard, but overall it is simple to understand how to fill in the information.

There are some GUI quirks in the current interface (entering negative values, moving attributes up and down, and making the scorecard properties page appear, for example), but these do not prevent the scorecard from working properly, and I am hoping that IBM will eventually address these minor irritants.

Some odd behaviours

Scorecard modeler has some behaviours that are not typical of the other editors in JRules. Namely, the modeler automatically creates new BOM entries in your project, and it creates a variable set that holds a variable pointing to each scorecard created. These are odd behaviours, but they are obviously required for things to work properly, and once you are familiar with them, troubleshooting becomes a bit easier.

These behaviours are even more prominent if you are using an architecture with multiple projects to hold the XOM, BOM, and rules, and especially if you have multiple BOM projects. The location where the scorecard modeler chooses to set up the variable set and add the BOM entries might not be the one you want or expect.

Another odd behaviour not shared by the other editors is that the scorecard modeler actually allows the selection of “non-verbalized” elements from the BOM. I am not sure why that is; it is contrary to the normal behaviour of every other editor in JRules, so it is good to note that a user is actually able to choose from items that are not usually available as choices (until they are verbalized).

The limitations

There are limitations that are good to know about upfront.

Naming of a scorecard has more limitations than any other “rule” in JRules. Don’t use spaces, dashes or underscores in your scorecard name. I believe that this limitation is due to the automatic creation of variables (see above) and that as long as you stick to those “naming rules” you will be doing just fine.

Renaming a scorecard is a little more work than what we might be used to. You can easily rename the scorecard, but the automatically created variable that goes with it is not renamed automatically. In some cases the old variable gets deleted but the new one does not get created. This can easily be fixed by manually creating a variable whose name matches your scorecard name in the proper variable set, and everything will be OK.

Documentation generated by the rule reports will not include a copy of the scorecard, and there is no copy-and-paste to or from the scorecard modeler. So documentation will have to be created using the age-old technique of taking screen captures and pasting images into documents… Not ideal. I am hoping that IBM addresses this in future releases.

There are other limitations and all users should get familiar with them by reading the readme file that comes with scorecard modeler.

The issues

As of version 7.0.2, there are some issues that I encountered in a specific environment, for which IBM had to provide hot fixes to make the scorecard modeler work in that environment. Version 7.1 fixes most of these issues save one, as far as I can tell (some attributes with a specific configuration can’t be selected at all to be part of the scorecard).

In IBM’s defence, IBM support has been extremely responsive in helping resolve the important issues encountered, and the hot fixes provided make it possible to use the scorecard modeler.

The Documentation

The documentation provided with the scorecard modeler is more a relatively straightforward tutorial than actual documentation. Some of the features are not documented at all, or are so badly documented that even IBM has a hard time figuring out what they are for…

My suggestion on this is that new users should walk through the tutorials provided and then immediately try to make their own scorecards work in the modeler.

One of the good things from the documentation point of view is that at least the API documentation is included so that if customizations are needed for reasoning strategies and things of that sort, developers will have a starting point for working on the customizations.

The Verdict

The scorecard modeler in some cases still feels like an “early version” of the software, but with IBM’s help (through support and hot fixes) the scorecard modeler ends up being a great tool to enter scorecards into JRules.

Although it is a bit quirky, overall I think this tool has the potential to be a great addition to JRules.