Intelligent BPM maps

14 01 2010

I have posted before about how BPM maps can hinder flexibility and capability to some extent (particularly around systems integration). See:

https://andrewonedegree.wordpress.com/2009/11/30/bpm-mapping-tools-integrating-data/

https://andrewonedegree.wordpress.com/2010/01/08/incorporating-automation-into-your-processes/

However, in this post I want to take this further by looking at how BPM maps (I use the term map loosely here) can become intelligent and hold much more than just business process routing rules…

The role of the map

For many this is the “definition”, if you like, of a business process, shown in a graphical map format. This is great, and it’s true to some extent. However, I believe the primary role of a workflow system is to deliver systems integration, not a predefined diagram of a process. BPM and workflow only work well when they bring together systems, people and data to maximise the efficiency of a business requirement (or process if you like).

So what is the role of the map? It is there to provide the business rules that bring a cross-section of applications together into a solution that allows users to do their work effectively. This work is shown as a process. Personally, I prefer to see processes graphically, but not within my BPM system, or at least not as the thing that defines the rules within my BPM solution. Graphical representations are great for identifying the business requirement, and should be produced by a BA. Process maps should then be used as a “specification”, if you like, from which a developer builds the intelligent process map…

Using a developer to implement my business rules

I know that many of us want a nice mapping tool that allows a business analyst (BA) to create and modify maps / business processes. However, in the real world this brings a number of restrictions / issues.

  1. You can’t easily integrate with other LOBs and the data required for a particular step
  2. You can be limited by other business rules / factors (that are outside the scope of your map)
  3. Automated steps often require “Robot” type step applications to be written (specifically for your requirement)
  4. Much more emphasis is placed on developers for the actual implementation / front end of much of the system (if you require intelligent or more complicated systems integration)

As mapping tools get more powerful you still have these issues, mainly because a BA is just that: an analyst, not a technical person who wants to be, or should be, bogged down in the technicalities, functions and calculations required by the business.

By using a developer to take your map and build business rules into a BPM system (if your BPM architecture allows this type of process definition), you open up a world of systems integration and flexibility. Effectively your business rules / map can now become intelligent.

Intelligent maps

An intelligent map is more than just business processing rules. It contains actual business processing logic; it can bring in data from third-party software, carry out complex calculations and functions, and raise events and triggers, all within the map itself.
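
To make this concrete, here is a minimal sketch of what a single step in such a map might hold beyond a routing rule. Everything here (the IntelligentStep class, the field names, the credit-check example) is a hypothetical illustration of the idea, not any particular BPM product’s API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class IntelligentStep:
    """One step of a hypothetical 'intelligent' map: routing plus logic, data and triggers."""
    name: str
    # Routing rule: decides the next step from the case data (what most maps stop at).
    route: Callable[[Dict[str, Any]], str]
    # Integration hooks: pull data in from third-party / LOB systems.
    data_sources: Dict[str, Callable[[Dict[str, Any]], Any]] = field(default_factory=dict)
    # Business logic executed inside the map itself (calculations, validations).
    logic: Callable[[Dict[str, Any]], Dict[str, Any]] = lambda case: case
    # Events / triggers raised when the step completes.
    triggers: List[Callable[[Dict[str, Any]], None]] = field(default_factory=list)

    def execute(self, case: Dict[str, Any]) -> str:
        for key, fetch in self.data_sources.items():
            case[key] = fetch(case)          # bring in LOB data
        case = self.logic(case)              # run the business logic within the map
        for trigger in self.triggers:
            trigger(case)                    # raise any events
        return self.route(case)              # then route as an ordinary map would

# Illustrative usage: in reality the credit score would come from a LOB system call.
credit_check = IntelligentStep(
    name="Credit check",
    route=lambda case: "approve" if case["credit_score"] >= 600 else "manual-review",
    data_sources={"credit_score": lambda case: 640},
)
print(credit_check.execute({"customer_id": "C-001"}))   # -> "approve"
```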

Most BPM maps cannot provide this level of integration or the capability to execute processing functions. Often these functions are provided in the form of “Robot” applications or step processors: background applications or services written by developers to add business rules and functionality to the process map, because the map itself cannot support this level of intelligence. The outcome is a solution that needs more processing power, requires greater input from developers, and is harder and more costly to maintain.

By shifting the emphasis of functions and rules to an intelligent map, you provide a BPM solution that delivers greater out-of-the-box functionality, keeps initial costs far lower and requires less development work / fewer bespoke step processors to be written. In addition, when your business needs to adapt and change, updating processes is far easier and quicker. Since the map itself contains the business rules of your processes (as well as the definition of the process), you need only modify one thing: your intelligent map. There are no background processors that need modification and no new application changes to be made, because the business intelligence is all stored in a single place…

Quick example…

A good example of a BPM platform that works in this way is workFile BPM. It has been architected to ensure the “map” holds all the business rules, as well as having the capability to integrate with other LOBs and execute functions, triggers and so on within the map. Developers have to build the map in this case, based on information provided by BAs.

The out-of-the-box user interface is in most cases the only interface you need, simply because of the intelligence available at the map level. However, there will always be occasions when “bespoke” processors are required, and the workFile BPM platform provides a complete XML Web Service API with which developers can build on the intelligence provided in their maps within workFile BPM…

Conclusion: System integrator or process definer…

I see the main aim of BPM and workflow as raising the efficiency of businesses by making it easier for users, and the business, to complete work. Defining processes allows us to visualise this work; however, it is the BPM platform that brings together everything required to complete it. So a BPM platform should be a systems integrator first and foremost; that is the real beauty of BPM and workflow…





Incorporating automation into your processes

8 01 2010

This may seem quite simple, yet it is often neglected. When designing your processes, ask yourself what can be automated, and how much that automation will add to the efficiency of the business process.

Identification

It can be tricky to identify everything that can be automated straight away; only after a good analysis phase will the majority of processes / tasks that can be automated be identified. Some will be obvious candidates for automation, typically calculations or actions that can be completed with the information already stored within the BPM system. Other candidates may not be so obvious. The less obvious processes / tasks are often overlooked because of the way the process is currently worked, typically because they require integration between systems (maybe even multiple LOB applications). It is always important to try to automate as much as possible, or at least identify everything that, in an ideal world, could be automated…

Restrictions

Once you have identified all your possible automated “steps”, you really need to see what you can realistically automate given your current BPM technology, your LOB applications’ integration capabilities and, of course, your budget…

One of the big problems with BPM modelling tools is that they can become very restrictive in what can be achieved with regards to integration. This is something I have blogged about in the past (https://andrewonedegree.wordpress.com/2009/11/30/bpm-mapping-tools-integrating-data/). Many automated “steps” will require the services of a developer; hopefully your chosen BPM platform will support this kind of integration and processing…

The next hurdle is to identify what integration capabilities your other LOB applications provide. If you cannot integrate with them at all, then your step cannot be automated and will have to rely on some good old-fashioned user processing power, which is not so efficient. Having a good IT department, or using a good IT consultancy, typically means that your company will have a clear understanding of its IT and some form of strategy / roadmap in place. If so, you will probably find that your LOB applications (unless very old) will provide some form of API allowing integration possibilities. (Ideally your business will have preferred technology platforms, such as .NET, Windows etc.) If this is the case, then you can start to investigate just how much integration is possible and evaluate the costs involved in automating your process step…
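
As a rough illustration of the kind of automated step this makes possible, here is a small sketch that pulls data from a LOB application and completes a calculation with no user involvement. It assumes the LOB application exposes a simple HTTP/JSON API; the endpoint, field names and calculation are all hypothetical.

```python
import json
import urllib.request

LOB_API = "https://lob.example.internal/api/policies/"   # hypothetical endpoint

def automate_premium_check(case: dict) -> dict:
    """Fetch policy data from the LOB system and complete a step no user needs to touch."""
    with urllib.request.urlopen(LOB_API + case["policy_number"]) as response:
        policy = json.load(response)

    # The calculation a user would otherwise perform by hand.
    case["premium_due"] = round(policy["base_premium"] * (1 + policy["risk_loading"]), 2)
    case["step_status"] = "auto-completed"
    return case
```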

The benefits

Automated process steps provide a number of benefits, the main two of which are:

  1. Efficiency
  2. Accuracy

There are obvious efficiency gains from automating a step, which in turn raise the efficiency of your whole process and improve SLAs etc. However, accuracy is often overlooked. Automated steps are far more accurate (once they have been fully tested) because they simply remove human error from that particular step. I am not saying this means your process will not have “issues”, but an automated step does remove user error from that particular part of your process, something that can otherwise be very time consuming.

With these two main benefits you also get a great “cost” benefit. If you measure your time and resources and place a monetary value on these, you will soon see a clear ROI timeline for automating a particular process step. This will typically be the deciding factor (if possible) in choosing to automate a particular step or not…
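
As a rough illustration of that ROI timeline, here is a back-of-the-envelope sketch. Every figure below is purely illustrative; plug in your own measured times, volumes and costs.

```python
# Back-of-the-envelope ROI figures; every number below is illustrative only.
manual_minutes_per_item = 6        # time a user spends on the step today
items_per_month = 4_000            # monthly volume of the step
hourly_cost = 22.0                 # fully loaded cost per user hour
automation_cost = 15_000.0         # one-off development / integration cost

monthly_saving = (manual_minutes_per_item / 60) * items_per_month * hourly_cost
payback_months = automation_cost / monthly_saving

print(f"Monthly saving: {monthly_saving:,.0f}")        # 8,800 per month
print(f"Payback period: {payback_months:.1f} months")  # roughly 1.7 months
```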

Conclusion

Whenever you review your business processes, even if you don’t have a BPM system in place, always ask yourself which processes or “steps / tasks” could be automated. Don’t feel restricted because a process spans multiple systems, departments or geographical spaces; just identify candidates for automation. A good way of doing this is by using a good independent consultant.

Automation is a great way of raising efficiency, accuracy and productivity, and of reducing operational costs. It is almost always in a company’s interest to automate as much as possible…





ECM access on my phone?

7 01 2010

There is a lot being made of ECM and the ways in which users interact with content stored in an ECM repository. There is a real belief that more of us will choose to access ECM content via a multitude of devices, the most obvious being the mobile phone.

With smart phones such as the iPhone, Windows Mobile 6.5 devices and now Google’s Nexus, the real question I find myself asking is “will I really want to access content on my phone?” For many the answer will be “NO”, and for many others the answer will be a very loud “YES”. So what are the real benefits and issues, without getting bogged down in technical jargon…?

ECM on my phone…

Most of us like to be as flexible as possible when it comes to doing work. By this I mean that if I am on the train, instead of wasting my time (maybe sleeping?) I can get on with some work. With your phone you can check and send emails, respond to meeting requests and in many cases get quite a bit of work done before you are even in the office. The same flexibility is required when we may not be in the office for a while. Obviously my device of choice will be a laptop; however, the option to leave the laptop behind and use my phone is something that will appeal to many of us… Because of this, being able to connect and work in a “flexible” fashion is very important to individuals and to businesses as a whole.

Will my phone interact with our ECM solution?

Basically, “Yes”. Most phones now come with a web browser (all smart phones do), and if your ECM solution can provide a browser-based front end, then interacting with your ECM system isn’t technically very hard. The issue you may well face is using the device itself to navigate around the web pages and download / view the content you want. For me, this is a basic way of allowing content to be shown on a mobile phone, and most of the issues are then down to the device itself and what you can realistically achieve on it…

Do I have to use a browser on my phone?

The answer here is “No”. Using a browser gives us the simplest way of interacting with content in our ECM system; it’s also probably one of the cheapest. However, it isn’t the best solution for such a small device, and it makes certain features “fiddly” to use. Think of:

a) Searching

b) Checking in / out a file (if you would do such a thing)

c) Reviewing properties

d) Reviewing an audit log / history

e) Tracking in a Case Management / BPM system

This is because you will need a lot of clicks and a lot of zooming in and out in the browser.

The best solution is to provide mobile based applications that can interact with your ECM solution.

ECM mobile applications

If we realistically want to work and interact with our ECM platform, and for that matter Case Management / BPM solutions, then mobile applications are the way forward. With the power of smart phones ever increasing, having dedicated applications on your mobile phone isn’t a problem. With mobile applications comes greater flexibility, as each application will be specifically designed for devices with limited screen real estate. This makes the applications far simpler and easier to use, which means we are ultimately more likely to want to access our ECM systems via our mobile devices.
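
As a rough sketch of what such a dedicated mobile client might do differently from a generic browser front end, the example below asks a hypothetical ECM search endpoint for a handful of results and keeps only the few fields a small screen can usefully show. The URL and field names are illustrative, not any particular vendor’s API.

```python
import json
import urllib.parse
import urllib.request

ECM_SEARCH_URL = "https://ecm.example.internal/api/search"   # hypothetical endpoint

def search_for_phone(term: str, max_results: int = 10) -> list:
    """Return only the few fields a small screen can usefully show."""
    query = urllib.parse.urlencode({"q": term, "limit": max_results})
    with urllib.request.urlopen(f"{ECM_SEARCH_URL}?{query}") as response:
        hits = json.load(response)["results"]
    return [{"id": h["id"], "title": h["title"], "modified": h["modified"]} for h in hits]
```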

As we start 2010 it is obvious that ECM solutions need to provide many more ways for users to interact with them. This doesn’t mean a generic web environment / interface, but rather a multitude of applications and interfaces dedicated to interacting with your repository from a particular device. The trick for providers is offering a single “architecture” for access, which serves all of the different applications that may interact with your ECM repository…





Integration is Key (ECM / BPM / Social media)

11 11 2009

For many years I have waved the banner for single-application experiences for end users. If you can deliver a single application that allows the end user to carry out all their work, gain access to all the files they require and interact with many other LOB applications (without knowing it), just think what a positive impact that would have on any organisation. Think how much better informed that user would be, how much better their decision making would be, how much customer service and customer satisfaction would improve, and also think how much the organisation would gain in efficiency, productivity and ultimately profitability…

Integration has long been the key to this ideal, and ECM and BPM often show how this can work, integrating with key LOB applications.

Problems…

The problem is that people want everything to integrate without putting any effort in. This means that organisations spend a lot of money getting applications to integrate with other companies’ applications and software. While this can be great for the customer (if you have the same selection of applications and software) it isn’t always practical. Throw into the mix different operating systems, different versions of software and, the daddy of them all, different business requirements from that integration… all of a sudden you see how muddy the water can get, just how complicated systems integration can be, and why that single-application experience is so hard to achieve…

Progress

With the invention of XML has come a whole host of ways of integrating applications. It has provided the bridge between old COM and CORBA components, interoperability between application components and, most importantly, delivered us XML Web Services and Service Oriented Architecture (SOA).

I love XML Web Services and the capabilities these alone can open up to organisations. If applications deliver good APIs through web services, then integration is made so much easier, be it integration “out of the box” with connectors, or more efficiently through actual developers and professional services.

Is Social Media leading the way here?

Yes… There you go, a nice short answer. Basically, Social Media is leveraging web services (especially RESTful services) to allow integration between web sites / applications. Take the recent joining of forces of LinkedIn and Twitter: LinkedIn can now pull in your “tweets” and show them as status updates within your LinkedIn profile. Now think back to a business environment and you can see how using one application affects data / content in another application / area of the business. This type of seamless integration is what adds real efficiency gains across an enterprise.
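
The pattern itself is simple enough to sketch: pull items from one service’s RESTful API and push them to another as status updates. The endpoints and fields below are hypothetical, and real services would of course require authentication; the sketch only shows the shape of the integration.

```python
import json
import urllib.request

FEED_URL = "https://microblog.example.com/api/user/andrew/timeline"   # hypothetical
STATUS_URL = "https://network.example.com/api/profile/status"          # hypothetical

def sync_latest_posts(limit: int = 5) -> None:
    with urllib.request.urlopen(f"{FEED_URL}?limit={limit}") as response:
        posts = json.load(response)

    for post in posts:
        body = json.dumps({"status": post["text"]}).encode("utf-8")
        request = urllib.request.Request(
            STATUS_URL, data=body, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(request)    # push each post as a status update
```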

One Degree of Separation

When I founded One Degree Consulting, one of my main aims was to provide consultancy services and solutions that delivered a single degree of separation between the end user, the data / content, and the functions they require to do their job. This may sound a little idealistic, but it can be achieved and should be the goal of business decision makers in all organisations. To be blunt, to achieve this, application integration is key and should be at the forefront of any decision making when it comes to IT-based projects and solutions.

If Social Media sites hadn’t seen how powerful joining forces could be, and had instead maintained closed APIs that couldn’t easily be integrated with, then the whole point of Social Media and sharing may well have been lost… Businesses, take a leaf out of their book and think integration for everything… It’s key…





Document and file retrieval metadata

28 08 2009

Far too much focus is placed today on providing complex retrieval fields within ECM solutions, and far too much is made of them by customers. For sure, inherited values and properties can be of great use, but when you start to look at your actual requirements, retrieval fields are far too often simply made too complex.

Points to remember

When designing your retrieval fields, metadata or indexes (whatever you wish to call them), keep in mind just what a user will want / need to do to actually locate this file / document. Here is a quick list to help you:

  1. How much information will the user have about a file?
  2. How much time do you want to allow them to spend entering search information?
  3. How can your metadata fields actually assist in this?
  4. What sort of results will be brought back, and how clear will these be to the user (i.e. how quickly can they spot the file they want)?

Many recent systems spend a lot of effort on identifying files very accurately; however, in doing so they make the data capture stage (scanning and indexing) very complex and also require the user to spend longer setting up their searches.

Keep it simple

When designing / identifying metadata fields for files, always try to make and keep things as simple as possible.

First things first, identify the types of files you are storing. This doesn’t mean PDF, Word, TIFF and so on; rather it relates to their type within your business. Examples might include personnel files, expense claim forms, insurance claim forms, phone bills, customer details etc. (depending on your business).

Once you have made this identification, we get onto the point of retention. How long will a particular file type stay “live”, how long before it moves to an “archive”, and when will it be completely deleted? When doing this you may find that some logical separation of files starts to appear. NB: only create a new classification of file type if it is needed. Don’t do it merely as a logical separation; classifications should only be created to separate groups of metadata or to address issues such as migration and retention periods.

The tricky part now is to identify the metadata fields associated with your types of files. I would always suggest you try to keep these as simple as possible and use no more than seven fields to identify a file. This is where designers often get carried away using inherited fields from different objects within the repository. That is all well and good, and can really help in displaying search results (or a hierarchy of files) back to users. However, what I try to do is the following:

  1. Imagine you don’t know whether there are other files out there in the system (nothing to inherit from)
  2. Identify at least one key field (policy number, customer number, telephone number etc)
  3. Provide a list of options for the type of file it is (birth certificate, driving licence, claim form, phone contract, interview, recorded conversation etc)
  4. Only provide other fields that help logically distinguish this file from other files of the same type, or that help identify, for example, a customer entity within your business
  5. Provide as many “drop down list” options as possible. This ensures data is accurate and not reliant on spelling or interpretation
  6. Identify any metadata that may be “shared” with other file types. For example, a Policy Number may be found on multiple types of files within multiple classifications. In addition, Policy Number is unique within the business, so it can be used to tie a number of files to a particular policy holder.

If you stick to these six principles you will find that, nine times out of ten, you will have no call for complex inheritance or complex storage concepts. You will more than likely also have identified your classifications in full. Please note that your file types, along with your classifications, will also, nine times out of ten, provide you with enough criteria to accurately assign security information to these files.
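
To show how little is actually needed, here is a minimal sketch of a classification built along these lines. The classification name, fields and retention periods are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class FileClassification:
    name: str
    key_field: str           # the unique business identifier (e.g. policy number)
    file_types: list         # the drop-down of allowed file types
    metadata_fields: list    # keep to roughly seven or fewer
    live_months: int         # retention: time in the "live" store
    archive_months: int      # retention: time in the archive before deletion

insurance_claim = FileClassification(
    name="Insurance Claim",
    key_field="policy_number",
    file_types=["claim form", "photograph", "assessor report", "correspondence"],
    metadata_fields=["policy_number", "file_type", "claimant_surname", "claim_date"],
    live_months=24,
    archive_months=84,
)
```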

Once you have identified how information is to be retrieved, think about what information could be automatically captured at the data capture side of things. This sometimes illustrates fields that could be used to help identify files at retrieval; it also sometimes identifies fields that really aren’t appropriate.

Showing results

Often your retrieval system will display search results in a format that isn’t that well suited to you or your business needs. This is why so many “professional services” are offered to customers of such systems. Linking objects together, even showing them in a “tree view” type fashion, can help the end user. However, this isn’t a call for inherited properties, rather a call to display business-related information logically.

Also remember different types of searches can require different ways of displaying search results. This is sometimes overlooked by designers and system providers to the detriment of the user experience.

Finally, always think past the retrieval process. Once users have found the file they want, they will need to interact with it in some way; this could be simply viewing its content or passing it on to another user etc.

Conclusion

I am a firm believer in keeping things as simple as possible and often adopt the IT industry’s “80–20” rule. Far too often IT tries to deliver too much, and in doing so it over-complicates areas of the system or, more worryingly, the business. When this happens a project can, more often than not, be seen as a failure, when really, by delivering less, the customer gets more.

When putting together metadata for the retrieval of files, remember to keep things as simple as possible. Identify key fields and don’t get carried away capturing too much retrieval data. Also, always keep your end users in mind: that means the users at the scanning and indexing stage as well as the users searching for files. Sticking to these simple rules will ensure you deliver a file retrieval system that works efficiently, quickly and well for your end users and your business…





Do we need a web browser?

17 06 2009

There have been a lot of discussions floating around on Twitter and elsewhere about HTML 5 and whether it will kill Flash and Silverlight. To be honest, there is no way this can happen, simply because neither Flash nor Silverlight relies on a third party to make it work. In addition, neither has to conform to a generic standard that can hinder its functionality. Both have product roadmaps and both move forward at a rate that such a generic implementation could never hope to achieve. This means the user experience will always be (potentially) better, and that’s the main aim.

However, both Flash and Silverlight based web experiences do rely on a browser. A browser has to be used by the end user to locate the web site, and then for the Silverlight / Flash plug-in to be executed. After that, the browser is pretty much redundant…

In the beginning

In the beginning of the Internet, a browser was simply used to locate, access and display basic documents that were formatted in a particular way the browser would understand. (I know I am making this very simple, but I want everyone to see where I am going with today’s post.) This allowed people to access documents that were stored somewhere and read them. If you think of the browser as Microsoft Word, for example, and the HTML as the actual document, you start to see where I am coming from…

Browser wars…

Jumping forward to the web as it was a few years ago (before social media, video etc), the browser had become an integral way of accessing content on the Internet. Using the HTML format for documents, the browser allowed users to use an address to find content and then interact with it (move around the website etc). Now this is all fine if you have one browser, or a set of hard and fast standards that everyone conforms to. But in practice we don’t…

There are many browsers out there, which essentially have the primary purpose of displaying HTML content to you, the user. However, as users we want more. We want options to store favourites, access feeds, personalise the browser and so on. We also want websites to do “things”; we don’t want to just read content. So what we end up with is companies fighting for us to use their browser, which in turn becomes a bit of a nightmare for web developers, as their supposedly standardised HTML gets displayed differently in different browsers. Worse than this, some functions simply don’t work in some browsers…

Do browser wars actually help end users?

Old way of thinking…

For me the web has moved on. We are already saying goodbye to web 2.0, and some smart person will coin “web 3.0” before long (which will actually mean nothing different from web 2.0 or even web 1.0…). My point is that the web hasn’t changed its implementation; only we as users have changed the way we use the web and what we expect from it.

The concept of using a third-party application to access content on the web is old, and I don’t like it at all. I also think that using HTML, or any standardised format, to deliver applications is plainly wrong. As a developer you are always being “shoe-horned” into a way of thinking and working that hinders the application’s look, feel and interaction, and therefore detracts from your users’ experience.

Internet websites are no longer just formatted pages of information; many now act as applications and, with Flash and Silverlight, deliver highly rich, interactive user experiences. With such websites, the browser is simply used to find the RIA (rich internet application) and start it; the application isn’t run by the browser at all. So do we need a browser for this?

HTML 5 is supposed to deliver the ability to show video, for example. However, the same issues between browsers and websites will still apply; they will just be even more complicated.

A new way of using the web

In my own mind, HTML should remain as it is today, though with standards (especially regarding CSS) tightened. HTML is fine at delivering content; that is, after all, what it was designed for. However, delivering complete websites and rich user experiences should be left to bespoke software such as Flash and Silverlight. This form of distributed computing power helps end users and enriches their experience. I see no place for a browser on my machine, and would rather see the ability to browse the web as part of the underlying operating system.

Websites can then be developed in whatever technology they require, such as Silverlight or Flash. These technologies then display the website / application as they should. The web is used to provide access to, and to download, the application / content; there is no need for a browser…

I hear some of you crying at this point, “how will a search engine pick up the content?”, which is a good point. However, search engines must adapt. Why can they not interact with Flash and Silverlight? With the latter, the content is essentially stored as XML, so it’s not a massive leap. Also, what’s stopping search engines from picking up on tags that describe the content fully, still within the hosting HTML?

HTML shouldn’t be seen as just something a browser understands, but rather as a format the operating system itself understands. Once this happens, and we use the web to distribute applications and information in this fashion, many of the headaches of the web will be removed, and we can truly open up the potential of distributed and mobile applications / rich experiences… Silverlight 3.0 already delivers an out-of-browser experience, so are we really that far from this ideal?





Unstructured repository Vs Structured Repository

24 04 2009

I am talking here about Content Management repositories or Enterprise Content Management repositories (ECM).

This is a subject close to my heart; after all, if I hadn’t started designing my own ECM repository a number of years ago with our technical director, I wouldn’t be here writing this blog… (tad cliché, I know). For us, designing the repository was a passion, and we felt that we could deliver one that met a great many requirements at a comparatively low cost to customers…

I have spent some time today wandering around posts about repositories, structures, design principles, open source and so on. Most of these I found quite interesting; however, as usual, the articles are all very, very technical. Masses of space are used to argue the benefits of one particular implementation / API over another, without really looking at real-world solutions, especially when talking about repository design or structure.

I have read quite a bit about how a repository designed with no “structure” specifications is far more flexible than one that requires structured specifications. For me, this is just not looking at the whole picture. I fear, as with so many IT articles, that technology and new methods of doing things are hiding the requirement for good, solid design.


A solid specification

Your business applications (ECM applications in this case) need to know exactly what sort of information is being requested by the user; after all, users know what they expect / want to find. Because of this, a good ECM repository MUST allow designers / administrators the flexibility to define different types or structures of content. Now, I know there is an argument not to do this and instead to specify query objects to locate the content; however, I find this a less elegant, messier approach. I also feel that it detracts from a number of important services the repository itself can offer, such as retention period specifications.

Even with structures being required by the ECM repository, it can still act as a generic repository and deliver equal flexibility. The flexibility to define different structures must, however, extend to the point where each property of a structure can contain its own definition / structure. If your repository can do this, then all the flexibility in the world is at your fingertips, in a strict and managed fashion. For me this is a solid approach to designing and specifying a system that can grow with an enterprise and deliver great performance.


Query and Create

Whenever you query an ECM repository you should think about how the actual user will make the query and what they will expect. You don’t want to be too strict about what they can and can’t do, but you don’t want to be so “loose” that you risk performance implications. With the structured-specification approach laid out above, your repository gives the user the option to structure their query as much, or as little, as they choose. In this way the user is informing the system of the resulting structures they expect. It also allows developers to build highly efficient queries for repository-based applications.
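
A small sketch of that idea: a structured query lets the caller state which structure it expects back and supply as much, or as little, criteria as the user has. The class and field names are hypothetical, not any particular repository’s API.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredQuery:
    content_type: str                               # the structure the caller expects back
    criteria: dict = field(default_factory=dict)    # whatever the user knows
    return_fields: list = field(default_factory=list)

# A loosely specified query (the user only knows a surname)...
loose = StructuredQuery("CustomerFile", {"surname": "Smith"})

# ...and a tightly specified one, which the repository can satisfy far more efficiently.
tight = StructuredQuery(
    "CustomerFile",
    {"surname": "Smith", "policy_number": "P-102938"},
    return_fields=["policy_number", "file_type", "date_received"],
)
```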

When users need to create content within the repository is where I can see an argument for not specifying types / structures for content. If a structured type is expected, the user creating the content has to supply values for that structure. This could be a little time consuming for users, and a real issue if you have vast amounts of content that needs to be constantly added to your repository. Please note, though, that I don’t see this as an argument for unstructured design, rather an argument for good data capture processes and high levels of automation.

If you have a “known structure” being created in the repository, you have greater control over how the repository can work. For example, with known structures the following can all be known in advance and therefore provide valuable services (see the sketch after this list):

  1. Encryption type for content
  2. Streaming rules (if required)
  3. Digital Rights / Licensing of content and media
  4. How and where to physically store the content in the repository
  5. Specific migration and retention periods
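
As a hedged sketch of the idea, the snippet below shows how a known structure might carry those repository-level services with it; the content type and settings are purely illustrative.

```python
# Illustrative only: a known structure carrying its repository-level services.

CONTENT_TYPES = {
    "RecordedConversation": {
        "encryption": "AES-256",                     # encryption type for the content
        "streaming": {"enabled": True, "chunk_kb": 256},
        "drm": {"licensed": True, "max_views": None},
        "storage_tier": "audio-volume-01",           # where the content physically lives
        "retention": {"live_months": 12, "archive_months": 72},
    },
}

def services_for(content_type: str) -> dict:
    """Because the structure is known, the repository knows how to treat the content."""
    return CONTENT_TYPES[content_type]
```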


Accessibility, Applications and API

ECM repositories often require integration with other applications and services, so they really need to deliver a good API. For me, a good repository delivers a full API, one that gives developers the flexibility to use the repository as they see fit, without compromising security or the supported repository functionality. If your API does this, developers will be able to work in their own style and with the freedom they require, and they can leverage the repository in new ways, extending the value it provides to an organisation. This is key for any organisation that understands the importance of ECM.

However, your API cannot be proprietary to a particular technology or platform these days. This is where web services really help. We ourselves have spent a great deal of time ensuring our workFile Vision repository delivers a full web service API, so much so, that all our own applications only interact with the repository through that API. We know that if all our applications work in this way, there is no integration requirement that cannot be met.

Conclusion?

So what have I tried to say in all this rambling? Well, if you are thinking of investing in ECM, make sure you look at how the repository is designed, what performance it can deliver and how scalable it is. Unstructured repositories will struggle to compete with well-designed repositories that allow the structured specification of various types of content. Also make sure that your future requirements are not hampered by the lack of a flexible and full API.

If, on the other hand, you are a developer / designer looking to create your own content management repository, make sure you think about what you want to achieve and what expectations you have. While an “unstructured” repository store sounds attractive and highly flexible, it may well cause you issues in the longer term, especially with retention periods! Also think about interoperability: if this is more than just a design / development exercise, you are going to need to provide an API, one which can be consumed by other technologies and platforms.


Parting shot…

In the end, a good repository design should provide users and developers alike with as much or as little structure and flexibility as required…