workFile Vision. A change in direction

12 11 2010

Today’s post is very much centred on Business Process Management (BPM), Enterprise Content Management (ECM), Customer Relationship Management (CRM)…

Some of you may keep an eye on the news from my company, One Degree Consulting. If you have, you will know that the workFile ECM & BPM side of the business (the platform) will be going through a transition phase in the coming weeks and months. We have effectively torn up our existing road map for version 2.0 of the workFile Vision product and put together a new one. The new road map makes some big, well massive, changes to how we see the future of IT in business, the future of business solutions, the future of SMEs' access to solutions and, consequently, the Vision solution itself…

In the coming weeks, workFile and One Degree will publish more information on these changes, the effect they will have on the Vision suite, and the benefits they will bring to business.

In this post though, I want to give a quick outline of what some of these changes in thinking are, what the changes to the Vision product are, and what drivers led to this drastic new thinking…

Single Silo…That singular degree of separation

workFile is, if you didn't know, an ECM and BPM platform. However, it also provides records management and, with that, CRM capabilities to an extent. Other business-focused modules are built on top of the records management capabilities. However, all of these are very separate modules and silos, each aware only of small fragments of data that can be shared between them, effectively linking that content and making it of bigger use to an end user…

So what's the big idea? Well, the big change is to move away from a multiple-silo approach and to bring these different elements closely together, effectively delivering a single-silo solution for ECM, BPM, CRM, records management, and dynamic content processing and capture. The CRM module will be a thing of the past, with a dedicated customer-focused section of workFile built in its place (not built on top of records management functionality, and not seen as a separate module).

In essence, ECM, BPM, CRM etc will become modules of the past, superseded by a new way of looking at how we work as individuals, teams and organisations, and at how your organisation communicates and engages with its customers…All of these elements seen as one…

So how do we achieve this with the new version of workFile Vision?

Through state awareness, user empowerment and adaptation. The concept here is to ensure true state awareness between the user, the customer, the content and the process. By process, I don't mean a rigid path that work must follow, but rather a process guide that is highly adaptive to the needs of the content, the customer and the user.

In addition, the single UI and underlying capabilities of workFile allow real team working on items of work, making it a lot easier for an agent to collaborate and process their work. This may not sound like anything new, but it supports newer ways of working. We have a vision that people will work more as teams on individual pieces of work, effectively pulling together on items of work, not just in a collaborative fashion but in a real sense of working together. This is a big move away from BPM and Case Management as they are today, where the concept that we work as individuals and move work along sits at the centre of work / process thinking.

Max J Pucher has a great article on the future of work, in which he talks of users "swarming" to do work. In it he also states that by 2015, 40% or more of an organisation's work will be non-routine, up from around 25% today. Take the time to read his article, it is very informative: http://isismjpucher.wordpress.com/2010/11/12/the-future-of-work/

More than a single silo…

A single silo that supports content, customers, additional records and process information is the best approach. In addition, interconnectivity and multiple feeds of data will mean not only that users need greater perceptive skills, but that their software needs to deliver all of this to them in a fashion that is easy to identify and work with.

workFile, though, provides real flexibility in terms of content, status and structured data. This gives teams the flexibility to create new structured data records on the fly, in essence joining them directly to their work (which could be content based, customer based etc.). This may all sound complex, but essentially it is quite simple…It's how we would naturally work without the rigidity of structured processing (BPM).

Distribution…

Though we are moving to a single silo, this doesn't mean a centralised solution. On the contrary, we believe that departmental distribution is key to freedom and success. So workFile will support a greater level of distributed processing, with departments able to create their own content guides, process guides, rules etc. But this doesn't mean we are allowing duplication: commonality between departments will be identified and illustrated, and wherever applicable (and suitable) shared between them.

It’s a team approach

Working in "swarms" sounds quite fun, but in essence it means tightly knit teams working together quickly and efficiently. Traditional BPM presumes we work on pieces of work as individuals, then move them along to the next person. Sure, occasionally we will allow "branches" in the processing, or the splitting of items of work, but it doesn't support multiple people working on the same piece of work at the same time. So, with this in mind, Vision 2.0 will support a more team-based approach to working, and will ditch the rigidity of its traditional BPM platform, which was used for defining how users work.

Social Media

While social media is taking off, organisations either see it as some wonderful marketing tool or as something they need to get control of. However, social activities, social media sites, conversations etc are becoming an increasing part of a team's working day. These conversations and interactions aren't carried out at a set time, they aren't structured in their content and they don't form strong ties between you as an organisation and your customers. In addition, they are often disjointed, with an organisation unable to tie social media engagement with a customer back to that customer's record, for example.

So the trick is to ensure interactions can be processed by the right people, that the right people provide good information, and that social media is seen as a form of engagement and conversation, not just free marketing. In addition, the content generated from these interactions allows a flexible way of working; after all, the customer may send requests that don't follow a strict pattern, and as such, the user must be able to handle these requests flexibly. This content should also be recorded and brought into the solution, so that other team members have all the information they need to help…

workFile will become a lot more social, interacting with typical social media websites, and allowing users the freedom to interact in an expected fashion.

Flexibility, adaption and yet accountable

Organisations and management want to have full control; however, if they do, things become too rigid, too centralised and ultimately inflexible. So the solution is to trust our workers, to empower them and let them do their jobs. Sure, we need to ensure quality, service level agreements etc., but this can be done through guidelines and user empowerment. Accountability will always still be there, with the solution recording all interactions and use. But the point is, the user has the power to process the work how they wish (to an extent, obviously; certain rules have to be in place for compliance).

The big winners of Vision 2.0

So who is workFile Vision aimed at? Well, the big winners at first will be SMEs, simply because workFile is used mainly by organisations that fall into the SME category (with the odd exception). The new version will drive down the cost of IT and of these types of solution for SMEs…

However, larger organisations can easily benefit from this new way of thinking and working. If anything, while SMEs will see benefits due to the smaller investment required, larger organisations will not only share in this benefit but also see dramatic increases in productivity and efficiency – all with a reduction in administration and licensing costs…See, we didn't call it Vision for nothing.

Finally, a change in name…

Finally, the workFile ECM & BPM platform name will be no more. Though Vision remains the product suite, both the terms ECM and BPM will be removed from the workFile company name. Why? Simply because workFile will offer a lot more, and it deserves a new description of what it delivers…The marketing people can think of something, I am sure…





HTML 5, Flash, Silverlight, The Cloud…The future is here?

8 11 2010

I.T. seems to be at one of those cross-roads in terms of how people use software, where they use it, and how and where they choose to store their data.

There has been a lot in the press regarding HTML 5, and I have posted some thoughts on this in the past. There has been equally as much speculation as to the future of technologies such as Flash and Silverlight, and whether they become redundant as HTML 5 moves closer. In addition to these rather large discussions, we are also talking about moving content and software away from traditional servers and PCs, and handing control over to the "Cloud", "SkyDrives" etc…

So this post looks at indicators of where we may all end up, based on feedback I have received from businesses, the general public and phone professionals, plus my own thoughts…

HTML 5

This is the easiest one to start with really. HTML 5 will be here at some point; many say a lot sooner than I personally believe, and many (as there always are) say it will change everything (it won't at all). What HTML 5 will do is simply remove the need for browser plug-ins to enrich a user's web experience, to an extent. For example, we will no longer typically use Flash or Silverlight just to stream video, give our website some pretty animations etc. Some will argue that's a good thing, and if you are a purist (in terms of open environments, using only HTML to deliver content) then it is. For video and animations, yes, it is a good thing…

However, there are big problems with the whole architecture and the way HTML and the web in general work. The problem here is the web browser. When the web was conceived, the browser was simply an application that displayed some content; it wasn't meant to be an environment in which processing occurs. But here we are, and the browser is used to run "script" and to initiate communication between the client and the web server…HTML sets out standards but, as with everything, where there are multiple choices (in terms of browsers here) you get different results. No matter what standards are in place, web browsers handle, and will continue to handle, the same HTML and even the same script differently from each other. This is a horrendous state of affairs, meaning that the same website needs "allowances" for multiple browsers. This isn't good…From an end user's point of view, "who cares"; but from a development, maintenance and cost point of view, it really isn't acceptable. Even if the browsers did handle it all the same (or got very close), testing would still need to be carried out on each browser platform, and every time a new browser is released / updated. And this is where we will still be with HTML 5; don't listen to the marketing hype or to any so-called "experts" on this…These are simply the facts…HTML 5 will not change the web for us at all…
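
To make the "allowances" point concrete, here is a minimal sketch (my own illustration, not taken from any real site) of the kind of feature detection web developers end up writing; the specific capability checks are just examples:

```typescript
// A sketch of the per-browser "allowances" described above.
// Each check probes what this particular browser actually supports,
// because the HTML standard alone doesn't guarantee identical behaviour.

function detectCapabilities() {
  // HTML 5 <video>: present in some browsers, absent or partial in others.
  const video = document.createElement("video");
  const hasVideo = typeof video.canPlayType === "function";

  // Even where <video> exists, codec support differs per browser.
  const canPlayH264 = hasVideo && video.canPlayType('video/mp4; codecs="avc1.42E01E"') !== "";
  const canPlayOgg = hasVideo && video.canPlayType('video/ogg; codecs="theora"') !== "";

  // Canvas is another HTML 5 feature that must be probed, not assumed.
  const canvas = document.createElement("canvas");
  const hasCanvas = typeof canvas.getContext === "function";

  return { hasVideo, canPlayH264, canPlayOgg, hasCanvas };
}

// A site then branches on the result: native <video> in one browser, a
// Flash or Silverlight fallback in another, which is exactly the
// duplication of effort the post laments.
console.log(detectCapabilities());
```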

Silverlight and Flash

HTML 5 will have a big impact on Flash, I believe; after all, sites that utilise Flash do so to enrich the website. HTML 5 will do this, and unfortunately for Flash, developers will adopt it and leverage it before they look at Flash. So where can Flash go? Well, there are still many things Flash can offer that HTML 5 won't be able to, or at least won't be able to offer consistently across all browsers. Because of this, I see Flash filling the small gaps that HTML 5 leaves (the same applies to Silverlight). I do think, though, that Flash will see a massive reduction in its use on the web, but it will maintain its use for presentations, short movies and games.

Silverlight is a little different. I have never really seen Silverlight as a pure web technology, and those out there who keep comparing it to Flash or HTML 5 obviously know nothing about Silverlight. Sure, Silverlight can give you animation online, deliver RIAs, stream movies etc (all that Flash and HTML 5 can do), but Silverlight has a lot more to offer. The architecture behind Silverlight, I feel, is spot on. It mixes the worlds of desktop and web seamlessly, effectively delivering desktop applications (with all their power) via the internet for installation, communication and maintenance. This is very different to HTML 5. Because of this, developers will use Silverlight for business applications and for RIAs that need to do more (integrate, carry out complex functions etc), all without relying on the browser or server to do the processing. This reduces testing and ensures a single code base (and that's how it should be). In addition, you get frequent updates and full support from Microsoft, which again are good things for real developers…

There has been some confusion of late (mainly among the media and Microsoft haters) as to the value of Silverlight to Microsoft, and the fact that it is also used on the new Windows Mobile 7 platform. Let's get this clear: Microsoft will now concentrate more on HTML 5, as HTML 5 is a big online technology and Microsoft needs to keep up with others. So this is no surprise. However, Silverlight is, and will remain, a core development platform for the web, RIAs and out-of-browser applications and experiences (which it delivers now). Sure, the Silverlight team will also now work more on mobile use and adoption, and that's because they need to. So all we are talking about is the prioritisation of Silverlight's progression. This is clear from reading up on Silverlight, looking at Microsoft's future plans and listening to what is said, rather than reading between the lines when a press release comes from Microsoft…Silverlight will become increasingly important to Microsoft in the future, as more developers realise that they can use a single platform to code for the web, the desktop and mobile devices…

Cloud computing, SaaS and SkyDrives

I mention SkyDrives here as that is what Microsoft terms your cloud storage space with Windows Live and on your Windows Mobile 7 phone.

I think in the past couple of weeks, I have had more feedback than ever before on the cloud and its use, from both businesses and the general public.

So let's look at businesses. Businesses cannot move everything over to the cloud, it's as simple as that. There are savings to be made via the cloud for business, but a business has to be sure that it can actually move those applications and content to the cloud, that it doesn't already have a cheaper alternative, that it can trust the cloud provider's security measures 100%, and that there is a way to port to other providers in the future. All in all, business is still wary of this, and why shouldn't it be? I see businesses embracing the cloud and SaaS for smaller elements of their operation, ones that do not require so much compliance and that are not that critical to the organisation. This is not a bad thing; rather it is a good thing, the cloud here allowing IT to provide better solutions to the business at lower costs. I don't believe the increased popularity of the cloud will translate into vast amounts of an organisation's data or services being moved there. Rather, the cloud becomes another IT implementation option.

So what of individuals? Well, only last week I posted that individuals may well be the big winners of cloud computing. But even here, individuals are sceptical of cloud-based services. It seems that keeping some photos, music and videos online is fine, but when it comes to more personal documentation, you cannot beat a good hard drive or storage device at home. Because of this, I don't see the masses adopting cloud computing and SkyDrives…Google may want us all to use the cloud for software and storage, but the simple fact is, we like control over everything. If our data and content is only in the cloud, then we feel vulnerable, not just to theft, hackers or work colleagues finding things out etc, but also to the cloud providers themselves. Let's face it: Google has an appalling record on data protection and our privacy.

So what is the right usage for individuals? Well, Microsoft I feel has pitched it correctly, providing 25GB of SkyDrive space to Windows Live users (perhaps a little small, really). Still, this is enough space for most people; sure, it could be larger to allow us to sync a lot more content, especially music and videos, but it's a good start. I also love the fact that my Windows Mobile 7 phone provides options to just take a picture and have it stored in my SkyDrive and not on the device. But I still have enough space on the device to cart around a certain level of music, pictures etc (no doubt this amount of storage will grow). So it's a nice blend, one I personally am comfortable with, and one most people I speak to are comfortable with…

Conclusion, if any?

It seems that in IT, too many marketing companies, experts etc provide too much hype. Everything is either "brilliant" or "rubbish and a fail". It's either 100% the way of the future or 0%…There is never any middle ground, and yet the middle ground is actually where we are heading, in terms of our web usage, devices, online services and storage…And there is nothing wrong with that at all…





Those BPM professionals in the Grey area. Business or IT?

5 11 2010

I have been partaking in a discussion on LinkedIn which asked the question, "Why majority of BPM projects fail?" Now, the answer is simple. If you don't have support for the BPM project from all levels of management, and you don't have a champion pushing it to succeed within the business, then the project has a real hard time succeeding. And this is because it's all about changing the way people work; people almost always have to be forced to accept change…

Anyway, one of the issues thrown up in the discussion thread is the confusion many have between who are IT people and who are business people. Many comments talk about IT when really they are talking about the business…So why the confusion? Isn't the difference obvious?

The middle man…

Ok, so you are starting an IT project; let's say it is a BPM solution. The vendor's employees turn up on site and start to look at your current business processes and the way you currently work, and they try to understand your business. Now, they are working for what is essentially an IT company, and they are working on an IT project, with the aim of delivering an IT solution to a business need…But are they IT?

Many people see these people as being in IT. Most would themselves say "yep, I work in IT". But the reality is they fall into the grey area. Essentially they are BAs working on an IT project. I see them as business people. They don't need to know a thing about IT, technologies, concepts, solution architecture etc. But they do need to understand business rules, the nature of the business and how the people work within that business. So they are business people, right?

But these same people also design new business processes; that's ok, right? But it goes further: they also model the new processes, showing exception handling, essentially building the rules into the IT solution. They also put forward their own thoughts on how, and where, things should integrate with other systems. You see the grey area appearing? Essentially they are using administration tools within the IT solution to build the IT solution to meet the business needs. This really shouldn't be happening. I know many designer tools are out there to allow this, and they see the BA role as a business one. But the presumption here is that the BA is all-knowing in terms of the business (which is wrong; many times they can't and won't discover all the processes and sub-processes), but also all-knowing in terms of the IT, understanding IT architecture, integration issues, exception processing etc. So our BA is not really in IT, but is presumed to know everything about it. I can safely say I can count on one hand the people I have met who can do this role…

So based on this model, the IT geeks (as they seem to be called – which is a little harsh) get to work and build some integration in, build some robot apps etc; it all depends on what has been modelled and how flexible your BPM platform is, I guess. Oh, they are also the ones who seem to get the blame if something is missing in a process. A little harsh…

That grey area

So these BAs are not IT; they don't need to know anything about IT to do their job. Yet they do need to understand and learn business, and how an organisation does its business. So they are in business…But that grey area arises because they then jump into the IT side of things to help build the solution; they essentially get the IT ball rolling, often on their own…

What’s the issue?

The issue is that a vital step gets missed out, and that is the step where the real IT bods get to look at the business needs, where the business needs are communicated directly to IT. Really, IT should be involved before new processes are mapped out. They can foresee technical problems, and they can look at other ways of working that could be even more efficient and have a big impact on the future processes. It is this stage that gets missed, simply because the IT project relies on the BA to know everything…And that's because people from the organisation see these BAs as IT, and IT sees them as the business…

The designer makes things worse

The designer tool is one of the big culprits here in BPM solutions. Because it is supposedly "business focused", or business facing, BAs build the maps and model the solutions. If that tool were not available, they would need to communicate with IT, which would enforce that missing step…

This isn’t my only issue with the designer. I have posted a few times about how restrictive it is and how it can have a negative impact on the efficiency of the solution…





Are Stored Procedures Good or Bad? And when to use them?

8 03 2010

There are a hell of a lot of articles / posts on "stored procedures are bad" or "stored procedures are good", all of which are written from one aspect or point of view. They never really offer much in terms of when or when not to use them for a developer (or someone at uni, or even an application / system architect re-thinking things). You can almost always tell the area of IT in which these people specialise (or have the most experience) just from their views on this subject…So what is the big problem? Why the discussion?…

Well, people like to have "golden rules" or "answers". In this case, there isn't a particular view that is right or wrong, only the correct view for a particular organisation's requirements. Don't leave the decision in a data architect's hands, nor an application developer's (both bring different skills and views to the table). So when embarking on a new project, or a database overhaul, you really need to bring together a number of key personnel to ensure you use your database – and stored procedures – wisely…There is no golden rule of "stored procedures are evil" or "stored procedures are great"…

Understanding the real pros and cons

Before you can really determine when best to use a stored procedure, you have to think about the benefits of using them and the negatives. Of course, this is where troubles arise – some people have arguments that are valid, others not so valid. So let's break these down…

Pros

Well, first off, let's think about this afresh rather than going on "what we have learnt in the past". We can all agree that modularisation and de-coupling are good things, so de-coupling our data from our applications can only be a good thing. Great. Now, how do we extract that data in the most modular fashion, with minimal reliance on anything else? The answer: a stored procedure. The stored procedure provides data architects (and other database roles) with the ability to maintain the database as a separate entity (as much as is possible). This provides great flexibility, along with other benefits.

Now, some will argue this could be done in a DAL (Data Access Layer), which is true; however, this means that we have in effect "coupled" our database to another external layer. Now, who will maintain this DAL? How accessible and easy is it to update, especially if a number of LOBs are dependent on it? In addition, a DAL will be written in what language? Languages change far more frequently than our actual data – many companies have data that is a hell of a lot older (maybe 40-50 years older) than relatively new languages. To illustrate my point: if you used a DAL written in COM+, how easy would it be for an organisation to find someone to write a web page that utilises that COM+ DAL layer in today's marketplace? Not great. However, can I find someone who can code that web page and have it interact with a stored procedure? Oh yes, very easily…

Another pro is that you can "tune" your stored procedure without really having an effect on calling applications. No application code needs to be re-compiled, no deployment made etc. All that is required is a competent data architect or administrator. This ability to "tune" exists because the database doesn't have the overhead of being linked to a particular layer (or GUI, if bound).
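
As a minimal sketch of this decoupling (my own illustration; the `DbClient` interface, the `usp_GetOpenOrders` procedure and the columns are all assumptions, not from any particular product), the application below depends only on the procedure's name and parameters, so the SQL behind it can be tuned or rewritten without the caller being recompiled or redeployed:

```typescript
// Hypothetical minimal database client: any driver exposing a
// parameterised query call would do here.
interface DbClient {
  query(sql: string, params: unknown[]): Promise<Record<string, unknown>[]>;
}

interface OpenOrder {
  orderId: number;
  customerId: number;
  total: number;
}

// The application's whole knowledge of the database is this contract:
// a procedure name plus parameters. Indexes, joins and even the table
// layout behind usp_GetOpenOrders can change without touching this code.
async function getOpenOrders(db: DbClient, customerId: number): Promise<OpenOrder[]> {
  const rows = await db.query("EXEC usp_GetOpenOrders @CustomerId = ?", [customerId]);
  return rows as unknown as OpenOrder[];
}
```

Contrast this with a DAL that builds the SQL in application code: any tuning there means a recompile and redeployment of every dependent application.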

Performance is sometimes mentioned here, especially as stored procedures are compiled in some cases. Now, in recent years dynamic SQL performance has caught up; however, let's look at the bigger picture of performance. I have worked on a number of projects where a DAL layer was used and yes, it executed a query at almost the same speed as our SP (on most occasions the SP was still faster). However, for more and more complex queries and business rules, the DAL layer became slower. Why? Basically because it used data inefficiently – bringing some data back, passing it to a business layer to apply rules, then making another call to the DB. Technically, keeping the business logic outside the database is the "better" solution; in practice, placing it inside the database provided far greater performance…

Finally, who actually writes the stored procedures? Often it is someone with extensive experience in this area, someone who can write the SP quickly and accurately. Ask yourself who will write the equivalent in a business application or DAL. More often than not, the junior developer (that could be a little harsh), or people who don't have as much experience of SQL as they could have, or experienced people who just don't have extensive knowledge of the data model / its architecture…In all of these cases, this can lead to poorly performing queries and ultimately poor application performance…

So the supposed cons

Many feel that you have a "vendor lock-in", which on the web everyone hates (it seems on the web most people want the earth from their software for free, maintained by the most ethical people in the world who do it to help developers and not for a penny…hmmm). Besides, how often does a company actually migrate to a different database? Not often. And if you write your stored procedures sticking as closely as you can to standard SQL, migration may not be such a big issue. To be honest, if you do migrate and you have a DAL or application tied to a particular database, good luck re-coding and testing your dynamic SQL there…

Algorithm tuning is also sometimes seen as a negative, the argument being that millions of developers tune SQL while only a handful can do that for stored procedures. This is really misguided. Developers who write SQL (even in their millions) will not bring anything more to the table, in terms of tuning, than a handful of good database architects…

Security…Oh well, this is a classic issue. Many think that because it runs in compiled code it is more secure. True, to an extent. But how many people (developers) have access to this? In addition, database security is not as bad as it once was; with complex user / role security in place you can really lock down a database, so that it is only accessed (with admin rights etc) by far fewer people, with fewer still knowing the full database schema. This last point is probably a negative one, especially if your organisation loses a few good DB people…but the key in such cases is to ensure proper handovers are completed (be responsible as an organisation).

A big, and valid, negative is that you have far greater "power" with dynamic SQL in a proper language. You can perform greater calculations, apply business rules etc. SQL is a little, well, restrictive. But that's a good thing – see the next section…

Stored procedures can also be highly restrictive, especially when you have a very dynamic database schema. By this I mean one which may update itself, or which an application updates due to business requirements. These are more common than you might think, though by no means are they the norm. In such cases, stored procedures and SQL are far too rigid and lacking in functionality, and you really will need your own DAL…

So when to use a stored procedure and when not to…

As you may have guessed, I am more in favour of the stored procedure than not. However, there are those occasions when they just aren't a great idea…It's identifying when to use them, how to use them, and when not to use them for your applications that is key to whether they are good or bad for you…

So when / how to use them:

  • Use them for basic and moderately complex typical functions (insert, delete, update etc)
  • Keep your SPs as standard as possible and not over complex
  • Don't be scared to have stored procedures that do contain "business logic" (do not get this confused with application logic – they are different)
  • Use SPs, triggers etc to enforce data integrity – your data will last longer than your applications and their chosen language (see the sketch below)
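
To make the data-integrity point concrete, here is a minimal sketch of deploying such a procedure from code. The `DbClient` interface, the `usp_InsertOrder` procedure and the `Customers` / `Orders` tables are all assumptions for illustration (T-SQL flavour assumed), not from any particular schema:

```typescript
// Hypothetical minimal client, as before: anything that can execute SQL.
interface DbClient {
  execute(sql: string): Promise<void>;
}

// The integrity rule lives with the data, not in any one application:
// every caller, in any language, now or decades from now, goes through it.
const createInsertOrder = `
CREATE PROCEDURE usp_InsertOrder
  @CustomerId INT,
  @Total DECIMAL(10, 2)
AS
BEGIN
  IF NOT EXISTS (SELECT 1 FROM Customers WHERE Id = @CustomerId)
  BEGIN
    RAISERROR('Unknown customer', 16, 1);
    RETURN;
  END
  INSERT INTO Orders (CustomerId, Total) VALUES (@CustomerId, @Total);
END
`;

async function deploy(db: DbClient): Promise<void> {
  await db.execute(createInsertOrder);
}
```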

When not to use them:

  • When your DB schema is more "dynamic", and therefore requires a more powerful language with features to understand the schema before using it (insert, delete, update)
  • When your application requires a lot of application logic from the extracted data
  • When you need to utilise a greater / more unique security model on the data (though in this case you may well use a mixture of dynamic SQL and stored procedures to get the job done)

With all of this in mind, to answer the question "are stored procedures good or bad?": they are great when you use them correctly…They are poor when your schema is too dynamic or complex, and they are evil when you use them incorrectly…





In browser ECM / over the web ECM

2 03 2010

I have been asked to talk a little about browser-based ECM solutions, or environments, and I thought, why not…First off, browser-based ECM interfaces haven't always been a great hit. In the early days of the web, web-based applications were rather clunky, requiring lots of moving around pages to get simple tasks completed. I am not going to talk about the shortcomings of the web for applications, as that is well documented, but for ECM this environment meant that many web-based solutions were slow, hard to utilise and, well, very clunky…

Why are ECM functions hard on the web?

Well, the basic functions aren't that hard these days. As we have all moved along in how we use the web, and in our expectations of it, so have web-based ECM solutions – they have improved drastically. However, the problem is that ECM encompasses so much: not just document management facilities, but the complete enterprise's worth of content, in all its forms. Add into this the possibility of social media based content and of course Business Process Management (or workflow), and you can see how this gets more and more complex. I haven't even touched on extensibility yet either…

So why are these things harder on the web? Well, because of the restrictions the web places on applications. The biggest restriction is the web browser itself; follow this up with security requirements and you can see why the web becomes almost suffocating for very free content based applications…

The benefits of browser based ECM

Simple: almost no installation on the client machine, and the ECM platform can be accessed by any machine with an Internet connection. This means administering the system is a lot simpler and can be moved outside your normal server-based type implementations. In theory, if architected well, you will also save on user licenses, as the web is "stateless", meaning you should not have to hold a user license when you aren't actually interacting with your ECM repository.
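
As a rough sketch of what that stateless licensing could look like (the `acquireLicense` / `releaseLicense` functions are hypothetical, purely to illustrate the architecture):

```typescript
// Hypothetical license pool API: a license is checked out only for the
// duration of one request, not held for the whole user session.
declare function acquireLicense(userId: string): Promise<{ token: string }>;
declare function releaseLicense(license: { token: string }): Promise<void>;

async function withLicense<T>(userId: string, work: () => Promise<T>): Promise<T> {
  const license = await acquireLicense(userId);
  try {
    // The user only "holds" a license while this request is running.
    return await work();
  } finally {
    await releaseLicense(license);
  }
}
```

Because HTTP requests are short-lived, ten users browsing a repository might only ever hold two or three licenses between them at any instant.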

However, don't think that a thin-client type implementation means your UI has to live in the web browser. You can move web-based applications out of the browser with technologies such as Silverlight. This means you get the benefits of the web without all the restrictions (especially if you choose to run in a "trusted" mode).

 

Good solutions…

If you have an ECM platform that is rather old in its underlying technology (I can't think of that many that aren't), you will probably find that its web-based solutions are a bit of a "hack together". The main reason is that technology, programming methodologies etc have changed greatly in the past 25 years, along with user expectations. This doesn't mean these solutions are bad; rather, beware that they may limit you in some way compared to newer platforms…

So what good solutions are there that run utilising the web? Well, I am not going to list any; rather, I am going to suggest that when looking at ECM solutions you think about / investigate the following points.

  1. Technology used to deliver the interface into the web browser
  2. Do you have to run your web application in the browser?
  3. Out of the box capabilities / configuration
  4. Extensibility of the out of the box type interfaces
  5. Distributed processing
  6. Integration capabilities
  7. Administration

 

There are more, but I want to keep this post from becoming some kind of white paper…

What you will find when you get down to these questions is that there are still limitations for many of the ECM players when implementing over the web.

Administration

Many web-based solutions are just that, web based. However, administration and the real complexities of ECM are still delivered primarily through a traditional application (which may be installed on the server). To be honest, if you are a web-based ECM provider, all features, including administration, should be available over the internet…

Distributed Processing Power

Remember, the point of a web-based application is that many people can connect to it; it's available to all that need it. However, some solutions place limitations on the number of users connecting via a web server. Why? In addition, some are highly restrictive with regards to which components are installed where. Again, why? What you are looking for is a real capability to share processing power across the system. This can be in the form of P2P (a valued contributor to my posts, Max J. Pucher, strongly recommends this) or a distributed service architecture (my own preference). Both methods provide vast scalability and performance, and these are key when you think about the web and implementing solutions over the web / intranet…

Application Configuration

Many web-based solutions provide a single look and feel and don't allow much application-based configuration. Because of this, developers traditionally built their own interfaces based on customer requirements, making the interfaces cleaner, more relevant and incorporating such business requirements as field validation (this is always more evident when looking at web-based solutions). However, this isn't quite what I am driving at. Ideally, you need the user to be able to configure parts of their user interface. This could be query forms, for example, or where menu options are displayed. The point is, once the user has the flexibility to configure parts of the UI, their productivity will increase. This is a key point, especially when we talk about my next point, extensibility.

Extensibility

This is a big, big thing. Traditional ECM applications (including those not on the web) provide extensibility through their API, allowing developers to deliver applications that integrate with other LOBs, add business rules etc to meet the customer's requirements (within a new application for the customer, not the "out of the box" product). This is the minimum when thinking ECM.

However, the real requirement is that the "out of the box" product allows business rules and applications to be plugged directly into it. This is so important for ECM-based solutions, as ECM within your organisation will grow to include more and more areas of what is termed content. In addition, why not allow the customer, or VARs for example, to add their own modules, extending the way in which the application and ECM are used…

Plug for workFile Vision

Using the web for ECM is a bit of a passion of mine now, and it is one of the key driving points behind our own ECM platform (workFile ECM – http://www.workFileECM.com ). When working with many other ECM players (as a consultant) I noticed shortcomings, and wanted to put together my own platform designed for the web, pure and simple…workFile ECM is a baby, and already we are improving how it works over the web…One of the restrictions on the administration of workFile ECM based applications was the web browser itself, with our own modeller / administration application working in a browser, but in a somewhat clunky fashion.

Things have moved on, and our workFile Vision repository and application take the next step: staying on the web, but moving out of the browser…

By doing this, we maintain all of the benefits of distributing an application over the web; however, we also have the added flexibility of running outside of the browser and providing features that can only be made available when running in a "trusted" fashion, such as integrating the web application with thick-client applications (take Microsoft Office, for example).

In addition, workFile Vision is fully extensible, providing an application framework that allows developers to design new modules and have them plugged seamlessly into the interface. This allows the ECM platform to grow with the customer's needs, without developers needing to re-write / re-design modules and applications. Taking this further, all modules can be configured by the user, for example allowing them to design the layout of a repository query form…
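
A rough sketch of what such a pluggable module contract could look like (entirely my own illustration; workFile's real framework will differ):

```typescript
// Minimal plug-in contract: the host discovers modules, mounts their UI,
// and hands back per-user configuration (e.g. a query form layout).

interface ModuleConfig {
  // Arbitrary user-editable settings, persisted by the host.
  [key: string]: unknown;
}

interface VisionModule {
  name: string;
  // Render into a host-supplied container element.
  mount(container: HTMLElement, config: ModuleConfig): void;
  // Called when the user re-configures the module (layout changes etc).
  onConfigChanged(config: ModuleConfig): void;
}

// The host keeps a registry; new modules plug in without re-writing the shell.
const registry: VisionModule[] = [];

function registerModule(mod: VisionModule): void {
  registry.push(mod);
}
```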

Though in the late stages of an Alpha release, workFile Vision 2.0 will deliver everything you would expect from an ECM platform, but much more in terms of the web, extensibility and scalability…Exciting times….I will keep you posted….





Business application UI design

19 02 2010

Now, this is my first post on this topic (I am sure many more will follow), and I want to talk about some fundamentals of user interfaces within business applications. More importantly, I want to distinguish between UI that looks great in a demonstration and UI that is great to use…Trust me, they are not the same thing…

Traditional business UI and poor design – it’s not good

When presenting your business application, the first thing it is judged on is the way it looks. It's a simple fact: if it looks awful, then people just aren't going to love your application, even if it's far and away the best thing out there in terms of functionality. It's also worth remembering that a poor UI will slow down users actually using the system, which is never good – it leads to lost time, poor efficiency, poor views of the system and ultimately frustration.

Typically, business application UIs are functional, and nothing more. All the fields the user needs to access are shown (hopefully in logical places) and the screen is often that lovely grey colour. There is nothing "flash" about traditional business application UI; however, functionality is no excuse for not delivering a great end-user experience. This is something that is becoming increasingly important for business applications – and one of the reasons is probably the widespread use of websites that look great…

UI that presents well, but is it great to use?

With WPF and “richer” UI environments (we can include Silverlight in here), UI design for business applications can really add value to the user experience. Screens can become more user friendly, intuitive to use and give the user greater feedback and guidance. However, you can take a good thing too far – and this is something that I have started to see quite a bit of…

Just because, as designers and developers, we have the tools to create something that really has "wow" factor, should we? There is a time and a place, and it is key here to remember what the actual use of the system is, how often users will be using the system (or particular screens), how experienced the users are and how quickly they need to complete their tasks. Let's look at a really basic example:

Let's look at searching for a customer's record. Now, I have seen some great and out-of-the-box ways of doing this. Some include browsing and dragging cabinets and records around, letting us navigate our way to the record / search areas of a system. Others use carousels that act as a "wizard", with each selection bringing a new set of carousel options (identifying customer type, then account type etc before providing a search screen)…These demonstrate great; they will knock the socks off the directors and you are a winner…You have put together something different, something intuitive, something that looks great. Well done…

However, let's now look at this in the real world…I have a user who wants to search for a customer quickly; they may even have the customer on the end of the phone. So is dragging objects around to build a search, or to navigate to a search, a great idea? Or will they find it restrictive, slow and ultimately frustrating?

Just because we have the tools to build "wow" UI and animations, we shouldn't feel we have to use them, and businesses shouldn't expect them to be included either…

UI that demonstrates well, but is great to use…

This is where we need to get to: UI that is operationally great – it allows users to work quickly and efficiently, with the bells and whistles available in WPF, Silverlight etc used to aid the user experience and bring real value to screens.

Let's look at our customer record search. We could have a UI that contains a shortcut key to a search panel, which can be slid onto our screen from within any other screen / module. The user enters some quick key information and is presented with the search results in a new tab…In a couple of keystrokes the user has called up a search and found the customer. They can now get on with servicing the customer's request and then go back to whatever task they were working on before. This doesn't look half as flash as our earlier UI; however, it looks good, and the use of the "bells and whistles" has added to the system functionally, as well as to the user experience.
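
As a rough sketch of the shortcut-key idea (browser-flavoured TypeScript; the panel markup, the key choice and the search call are all my own assumptions for illustration):

```typescript
// Global shortcut: Ctrl+K slides the search panel in from any screen.
document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.ctrlKey && event.key === "k") {
    event.preventDefault();
    const panel = document.getElementById("customer-search-panel");
    if (panel) {
      panel.classList.add("visible"); // a CSS transition does the sliding
      panel.querySelector("input")?.focus();
    }
  }
});

// Hypothetical helper that shows results in a new tab, so the user
// never leaves the task they were working on.
declare function openResultsTab(url: string): void;

function onSearchSubmit(term: string): void {
  openResultsTab(`/customers/search?q=${encodeURIComponent(term)}`);
}
```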

 

Conclusion…

We have to remember how the user works when designing screens – something that can be lost when going through requirements and thinking about what could look / demonstrate great. If your user needs to work quickly, or spends a lot of time in certain screens, then you still can't beat shortcut keys and use of a keyboard. Touch can help, but ultimately, navigating through great graphical interfaces can be slow and frustrating…Flashy new developer and design tools are there to help, and should be used when needed, not for the sake of using them…

For business applications the design rules should still be, “what works quickly for the user” and then, “how can we make that look and feel better”…





Redefine the way we use the web, to unlock its potential…Web 3.0?

6 02 2010

This is something I have been thinking about for a number of years now, but more so recently with all the talk of HTML 5. Basically, we haven't really changed the way we use the internet (from a technical point of view) since the web became mainstream, shall we say. Sure, we now use it in ways we hadn't dreamed of (our habits and the way we communicate with each other), but essentially the web still works the same way it always has: content rendered as HTML and displayed back to us in a web browser. Even if HTML 5 is the magic version and delivers so much more in terms of animation and streaming, has it actually changed the way we use the web, or the way the web works for us? No…

Let’s not go back to the good old Mainframe environment…

It seems more and more IT professionals and large organisations see the web as the new mainframe, especially when you start talking "thin client" and "cloud computing" (the cloud could be seen as our mainframe…scary). When you start looking at mainframe environments and then at cloud and thin-client computing, you see that the basic concepts are very similar. What do I mean? Well, all of the processing happens on a server; the machine you actually use to access it doesn't really have to do anything. In a mainframe environment we have dumb terminals; in the new way of thinking (trying not to laugh, sorry) we have a PC that runs a browser (this could be a very low-spec machine), and if all we did was "cloud compute" we perhaps wouldn't need anything else?

Sure, I see benefits, some of which are green, but the negatives are so obvious. They are essentially the same problems we had with mainframes – the same problems that led us to use the "PC" and the "network" to replace mainframes.

Some thin client issues?

Let me give you an example. Imagine you and I are working as designers, creating 3D computer models of pretty much anything. We may even be responsible for animating these 3D models (think something like Toy Story; I don't know why, it just popped into my head). Ok, now imagine you are part of a team of, say, 20 working on these models; of course you are designing Buzz, someone else Woody etc. Let's think about just how much processing power we need for this – just you and your requirements? The answer: quite a bit, well, a lot. Now imagine having to multiply that by 20. Oh, and now let's have that processing carried out in a "thin cloud computing environment" (of course your application is written with the fab HTML 5, which means we can do anything), which at the end of the day needs a hell of a lot of work going on at the server, not to mention traffic across our network…Do you see the problems?

Well, basically, even with the advances in our hardware, the server will be doing too much and things won't go well. The system will go slow, maybe crash; you as a designer will be going mad with frustration, along with the rest of your team; oh, and not to mention you are working to a deadline, so the project manager is now going mad too. Let's throw into the mix that our team is distributed across the States and the UK, and some of us love using Internet Explorer, some Firefox, some even Chrome…Hmm, though in theory the web is great here, it is no match for a good old client desktop and some distributed servers…

Now, I know I am focusing here on a situation that doesn't lend itself to "cloud computing" or "thin clients", but if we believe all the hype around HTML 5 and cloud computing, why shouldn't we think this is possible? As our hardware advances, so does our software (though at a slower rate, granted), and we as users (be we general public or business) expect more and more performance and capability. So while some of our requirements do seem to lean us towards a cloud computing way of working, soon they will no doubt swing back the other way (and won't we be repeating the mainframe and PC story all over again?)

There is an answer

The answer is pretty simple, to be honest, and it is something Flash pointed the way to a number of years ago when it first started popping up on the web: a mixture of the two.

So let's start evolving not just our habits, but how the web itself is used. The web becomes a communications network and in some ways returns to its roots. We can still use it the way we are used to, finding websites and viewing them in a web browser; however, those websites that aren't just presenting us with some information or basic shopping facilities, websites that are more "applications", get themselves installed on the client machine. Think MS Office on the web. Why install on the client? So that the user experience is not restricted by the web architecture or the browser, and so that processing loads are removed from the server and distributed back down to the client PC.

Isn't that what Flash was doing, installed and running on the client, err, years ago? Yes, and that's why Flash has worked so well up to now…The problems with Flash are not how it looks, nor its basic architecture (running on the client); the problem is that it doesn't lend itself to delivering "applications". It is great for the web, showing animations, funky banners, slick movies etc, but don't think it will be great at delivering that 3D modelling tool we spoke about earlier…

So let's go back to our 3D modelling requirement in the designer's studio. In our new web world we are now working with an RIA that actually runs on the client machine, uses local storage on the machine and uses the web only for bare communications and perhaps the storage of files that are to be shared. All of a sudden, all of the issues with "thin client" and "cloud computing" and server loads are removed, yet essentially we are still using the web, and "cloud computing" to an extent…

So the answer is RIAs that use the client's processing power and that do not run in the web browser.

Is this available…

Yes, it is. Since Microsoft launched its Silverlight platform (which many see only as a competitor to Flash), it has been working towards this type of scenario, where we can maximise the benefits of the PC along with the benefits of the web and cloud computing. Silverlight 3 was the first version to deliver an out-of-browser experience, and this has been taken further with Silverlight 4, which can run as a trusted application on the client machine. Oh, it also runs on Macs and PCs, and if in the browser, any browser…

Silverlight, though in some ways similar to Flash and even the old Java applets, is a new way of using the internet, rather than a re-invention of the same way of using the web with more bells and whistles. Like Flash and Java applets, Silverlight essentially runs on the client PC, which means we can utilise its processing power to do our work; it doesn't need to go back to the server for updates to the UI, business rules or anything like that, as it can all be done there on the client machine. However, it is connected and delivered essentially through the web as a communications network, so its data and files can easily be pulled and pushed across the web and stored there. Updates to the software are also delivered through the web, with the user able to get the latest version just by using the software itself.

At present this is all still young, but the potential is there to change our web experiences and what we realistically should be using the web for. MS Office could be delivered as nothing but a Silverlight OOB (out of browser) application, allowing us to purchase it online and be using it within moments. And it would look and feel just like the version we currently get from a CD (not the slightly less functional web version). Business applications could be delivered through organisations' intranets, or their "cloud providers". Websites that provide "secure" trade or partner areas would essentially have these installed on the client machine. Twitter, Facebook and other highly interactive websites would be delivered as RIAs installed on the machine (there is a prototype for Facebook already built, which you can download and use at http://www.silverlight.net/content/samples/apps/facebookclient/sfcquickinstall.aspx). And you haven't lost the flexibility of the web at all: if you were on a new machine and wanted to get to Facebook, you would still visit the website, where you would be prompted to install the client (a simple and quick install), and away you go, back on Facebook.

The future then is…

Re-defining the web as a communications network and moving RIAs out of the web browser and down onto the client. By using the web in this fashion we get a truly distributed environment that has the benefits of the web, but also the benefits of the client machine…





Can Silverlight save Windows Mobile?

22 01 2010

I love the Silverlight technology and since I first started working with it (back in Alpha 1.1) I have always wanted to see it working on my mobile phone. Could Silverlight 4 be the release that finally makes it to our mobiles?

iPhone?

In a recent post I looked at Silverlight being used on the iPhone, allowing the streaming of video content over the web to the iPhone (http://andrewonedegree.wordpress.com/2009/11/27/silverlight-on-your-iphone-even/ ). I took this as a big positive move from Microsoft: a sign that they are seriously looking at Silverlight being used on the mobile phone. The iPhone is currently seen as the benchmark for smart phones (though a lot of other manufacturers offer far more features and functions in their phones), so for Silverlight to be a true mobile success, it will need to work on the iPhone as well as Windows Mobile. We should also include other popular mobile platforms, such as Android.

Windows Mobile 7

For some time now, Windows phones have been stuck on v6.5, which is based on the Windows CE 5.2 kernel – the same kernel that powered Windows Mobile 5 (so that's back to 2004). It has meant that while Windows Mobile phones still offer great Office integration etc, they lack that certain wow factor, which is seen in abundance on the iPhone, for example.

One of the reasons why the iPhone is so popular is the vast number of applications you can get for the phone, brought about simply by the vast number of developers out there writing iPhone applications. So for Windows Mobile 7 to really compete, it too needs access to the development community and vast numbers of applications being developed for the platform…This is where Silverlight can step in…

[Image: Silverlight RIA – Silverlight delivers RIA, and on your mobile?]

Silverlight 4, the platform for building Windows Mobile 7 apps?

Looking through a recent ChannelWeb blog, it seems that a number of people have stepped forward and indicated that Windows Mobile 7 applications will be written and delivered via Silverlight 4. This makes great sense, as it opens up the mobile platform to vast numbers of .NET and Silverlight developers. It also provides a really usable platform for business applications to work on multiple devices (something I have been looking into myself – http://andrewonedegree.wordpress.com/2010/01/07/ecm-access-on-my-phone/ ). My own company uses Silverlight to deliver a number of solutions, so having the capability to re-use our Silverlight and .NET code to put together mobile versions of these powerful business applications would be highly desirable…

It's also worth pointing out that Silverlight development is moving along rapidly, with v3.0 released back in July 2009, only for the v4.0 beta to be launched just four months later in November. With this rapid development of the platform, we are also starting to see more and more businesses and software providers switching to Silverlight as their technology of choice.

The benefits go on: by using a “web” technology such as Silverlight, you don’t need to limit your applications to a particular native SDK, which is what you have to do for the iPhone or Android.

Conclusion

Is Silverlight the silver bullet that will make Windows Mobile 7 competitive and cool again? Well, it will go a hell of a long way to helping; however, the underlying operating system will have to run smoothly, be highly intuitive, support touch without the need for a stylus and, above all, look great while still providing all that Office integration we expect from a Windows platform. If Windows Mobile 7 can deliver this, then Silverlight will ensure a raft of developers get involved building cool applications, not just for the general public, but for businesses too…

Watch this space.





Intelligent BPM maps

14 01 2010

I have posted before about BPM maps hindering flexibility and capabilities to some extent (specifically regarding systems integration). See:

http://andrewonedegree.wordpress.com/2009/11/30/bpm-mapping-tools-integrating-data/

http://andrewonedegree.wordpress.com/2010/01/08/incorporating-automation-into-your-processes/

However, in this post I want to take this further by looking at how BPM maps (I use the term map loosely here) can become intelligent and hold much more than just business process routing rules…

The role of the map

For many, the map is the “definition”, if you like, of a business process, shown in a graphical format. This is fine, and true to some extent. However, I believe the primary role of a workflow system is to deliver systems integration, not a predefined diagram of a process. BPM and workflow only work well when they bring together systems, people and data to maximise the efficiency of a business requirement (or process if you like).

So what is the role of the map? It is there to hold the business rules that allow a cross-section of applications to deliver a solution in which users can do their work effectively. This work is shown as a process. For me, I prefer to see processes graphically, but not in my BPM system; or rather, not used to define the rules within my BPM solution. Graphical representations are great for identifying the business requirement, and should be produced by a BA. That process map should then be used as a “specification”, if you like, from which a developer builds the intelligent process map…

Using a developer to implement my business rules

I know that many of us want to have a nice mapping tool that allows a business analyst (BA) to create and modify maps / business processes. However, in the real world this brings a number of restrictions / issues:

  1. You can’t easily integrate with other LOB applications and the data required for a particular step
  2. You can be limited by other business rules / factors (those outside the scope of your map)
  3. Automated steps often require “Robot” type step applications to be written (specifically for your requirement)
  4. Much more emphasis is placed on developers for the actual implementation / front end of much of the system (if you require intelligent or more complicated systems integration)

As mapping tools get more powerful you still have these issues, mainly because a BA is just that, a business analyst: not a technical person who wants to be, or should be, bogged down in the technicalities, functions and calculations required by the business.

By using a developer to take your map and build the business rules into a BPM system (if your BPM architecture allows this type of process definition), you open up a world of systems integration and flexibility. Effectively, your business rules / map can now become intelligent.

Intelligent maps

An intelligent map is more than just business processing rules. It contains actual business processing logic: it can bring in data from third-party software, carry out complex calculations and functions, and raise events and triggers, all within the map itself.

Most BPM maps cannot provide this level of integration, or the capability to execute processing functions. Many times these functions are provided in the form of “Robot” applications or step processors: background applications or services written by developers to add business rules and functionality to the process map, because the map itself cannot support this level of intelligence. The outcome is a solution that requires much more processing power, demands greater input from developers, and is harder and more costly to maintain.
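To make that concrete, here is a hedged sketch of what such a robot step processor typically looks like. Every type and method name below (IBpmClient, GetNextItem and so on) is hypothetical, invented for illustration rather than taken from any real BPM API:

```csharp
// Hedged sketch of a traditional "Robot" step processor: a background
// worker that polls the BPM system for items parked at an automated
// step, applies a business rule, and routes the item onward.
// All type and method names are hypothetical.
using System;
using System.Collections.Generic;
using System.Threading;

public interface IBpmClient
{
    WorkItem GetNextItem(string stepName);   // next item waiting at a step
    void CompleteStep(WorkItem item, string route);
}

public class WorkItem
{
    public decimal OrderValue { get; set; }
    public Dictionary<string, object> Fields =
        new Dictionary<string, object>();
}

public class DiscountRobot
{
    private readonly IBpmClient _bpm;

    public DiscountRobot(IBpmClient bpm) { _bpm = bpm; }

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            WorkItem item = _bpm.GetNextItem("CalculateDiscount");
            if (item == null) { Thread.Sleep(5000); continue; }

            // The business rule lives here, outside the map, which is
            // exactly the maintenance problem described above.
            decimal discount = item.OrderValue > 10000m ? 0.10m : 0.02m;
            item.Fields["Discount"] = discount;

            _bpm.CompleteStep(item, "Approved");
        }
    }
}
```

Note the problem: if the discount rule changes, this service has to be modified, re-tested and re-deployed, quite separately from the map itself.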

By shifting the emphasis of functions and rules into an intelligent map, you provide a BPM solution that delivers greater out-of-the-box functionality, keeps initial costs far lower and requires less development work / fewer bespoke step processors. In addition, when your business needs to adapt and change, updating processes is far easier and quicker. Since the map itself contains the business rules of your processes (as well as the definition of the process), you need only modify one thing: your intelligent map. There are no background processors to modify and no application changes to make, because the business intelligence is all stored in a single place…
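By way of contrast, here is an equally hypothetical sketch of the intelligent-map idea. The same kind of business rule now lives inside the map definition, alongside the routing rules, so there is only one artifact to change (ProcessMap, CrmGateway and friends are invented stubs, not a real product API):

```csharp
using System;
using System.Collections.Generic;

// --- Minimal stubs so the sketch compiles; all hypothetical. ---
public class MapItem
{
    public int CustomerId;
    public decimal ClaimValue;
    public Dictionary<string, object> Fields = new Dictionary<string, object>();
}

public class ProcessMap
{
    public ProcessMap(string name) { Name = name; }
    public string Name { get; private set; }
    public void AddStep(string name, Action<MapItem> logic) { /* register step */ }
    public void AddRoute(string from, Func<MapItem, bool> when, string to) { /* register route */ }
}

public static class CrmGateway
{
    // Stand-in for a call out to a third-party LOB system.
    public static decimal GetPolicyLimit(int customerId) { return 5000m; }
}

// --- The "intelligent map": logic and routing rules in one place. ---
public static class ClaimProcessDefinition
{
    public static ProcessMap Build()
    {
        var map = new ProcessMap("NewClaim");

        // Business logic executed within the map: fetch third-party
        // data and compute a decision, with no external robot required.
        map.AddStep("ValidateClaim", item =>
        {
            decimal limit = CrmGateway.GetPolicyLimit(item.CustomerId);
            item.Fields["WithinLimit"] = item.ClaimValue <= limit;
        });

        // Routing rules read the values the step logic produced.
        map.AddRoute("ValidateClaim", i => (bool)i.Fields["WithinLimit"], "AutoApprove");
        map.AddRoute("ValidateClaim", i => !(bool)i.Fields["WithinLimit"], "ManualReview");

        return map;
    }
}
```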

Quick example…

A good example of a BPM platform that works in this way is workFile BPM. It has been architected so that the “map” holds all the business rules, as well as having the capability to integrate with other LOB applications and execute functions, triggers etc within the map. Developers build the map in this case, based on information provided by BAs.

The out of the box user interface is in most cases the only interface you need, simply because of the intelligence available at the map level. However, there will always be occasions when “bespoke” processors are required, and for these the workFile BPM platform provides a complete XML Web Service API with which developers can build on the intelligence already provided in their maps within workFile BPM…

Conclusion: System integrator or process definer…

I see the main aim of BPM and workflow as raising the efficiency of businesses by making it easier for users, and the business, to complete work. Defining processes allows us to visualise this work; however, it is the BPM platform that brings together everything required to complete the work. So a BPM platform should be a systems integrator first and foremost. That is the real beauty of BPM and workflow…





Incorporating automation into your processes

8 01 2010

This may seem quite simple; however, it is often somewhat neglected. When designing your processes, ask yourself just what can be automated, and how much that automation will add to the efficiency of the business process.

Identification

It can be tricky to identify everything that can be automated straight away. Only after a good analysis phase will the majority of processes / tasks that can be automated be identified. Some will be obvious candidates for automation; typically these are calculations or actions that can be completed with the information already stored within the BPM system. Other candidates may not be so obvious. The less obvious processes / tasks are often overlooked because of the way the process is currently worked, typically requiring integration between systems (maybe even multiple LOB applications). It is always important to try to automate as much as possible, or at least identify everything that, in an ideal world, could be automated…

Restrictions

Once you have identified all your possible automated “steps”, you really need to see what you can realistically automate given your current BPM technology, your LOB applications’ integration capabilities and, of course, your budget…

One of the big problems with BPM modelling tools is that they can become very restrictive in what can be achieved with regards to integration, something I have blogged about in the past (http://andrewonedegree.wordpress.com/2009/11/30/bpm-mapping-tools-integrating-data/). For many automated “steps” you will require the services of a developer; hopefully your chosen BPM platform will support this kind of integration and processing…

The next hurdle is to identify what integration capabilities your other LOB applications provide. If you cannot integrate with them at all, then your step cannot be automated and will have to rely on some good old-fashioned user processing power; not so efficient. Having a good IT department, or the use of a good IT consultancy, typically means that your company will have a clear understanding of its IT and some form of strategy / roadmap in place. If so, you will probably find that your LOB applications (unless very old) provide some form of API, opening up integration possibilities. (Ideally your business will have preferred technology platforms, such as .NET, Windows etc.) If this is the case, then you can start to investigate just how much integration is possible and evaluate the costs involved in automating your process step…

The benefits

Automated process steps provide a number of benefits, the main two of which are:

  1. Efficiency
  2. Accuracy

There are clear efficiency gains from automating a step, which raise the efficiency of your overall process and improve SLAs etc. However, accuracy is often overlooked. Automated steps are far more accurate (once they have been fully tested), as they simply remove human error from that particular step. Now, I am not saying your process will no longer have “issues”; what I am saying is that an automated step removes user error from that particular part of your process, and correcting user error can be very time consuming.

With these two main benefits comes a great cost benefit too. If you measure your time and resources and place a monetary value on them, you will soon see a clear ROI timeline for automating a particular process step. This will typically be the deciding factor (where automation is possible) in choosing whether or not to automate a particular step…
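As a rough, back-of-the-envelope illustration (every figure below is invented purely for the example), the payback calculation can be as simple as:

```csharp
// Back-of-the-envelope payback calculation for automating one step.
// All figures are invented for illustration.
using System;

public static class AutomationRoi
{
    public static void Main()
    {
        double minutesPerItem  = 4.0;    // manual effort the step removes
        double itemsPerMonth   = 3000;   // volume through the step
        double hourlyStaffCost = 22.0;   // fully loaded cost per hour
        double automationCost  = 15000;  // one-off build + test cost

        double monthlySaving = (minutesPerItem / 60.0) * itemsPerMonth * hourlyStaffCost;
        double paybackMonths = automationCost / monthlySaving;

        Console.WriteLine("Monthly saving: {0:C}", monthlySaving);  // 4,400.00
        Console.WriteLine("Payback: {0:F1} months", paybackMonths); // 3.4 months
    }
}
```

If the payback period comes out at a few months, as it does with these made-up figures, the case for automating the step more or less makes itself.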

Conclusion

Whenever you review your business processes, even if you don’t have a BPM system in place, always ask yourself which processes or steps / tasks could be automated. Don’t feel restricted because a process spans multiple systems, departments or geographical locations; just identify the candidates for automation. A good way of doing this is by using a good independent consultant.

Automation is a great way of raising efficiency, accuracy and productivity while reducing operational costs. It is almost always in a company’s interest to automate as much as possible…







