FileNET Panagon Capture…How to…

25 02 2010

Ahhh, now the inspiration behind today’s post is that I have noticed people finding my blog looking for the good old FileNET Panagon Capture objects – such as RepServer and RepObject – and how to unlock these components….

Now it has been a little while since I was programming in Panagon Capture, but this is the environment I first cut my teeth on when leaving uni. (Panagon Capture is a document capture environment for the FileNET Image Services and Document Management repositories.) Panagon Capture has seen me working all over the UK, Ireland and parts of Europe implementing capture solutions for FileNET implementations. Straight from uni it was getting dropped in at the deep end, but I have to say I enjoyed it – and it was how I made a name for myself at my first place of work…

Things to remember with the Capture object model

Ok, well first things first: the Capture object model got slated in its early days; it was too confusing to pick up and many people struggled with it. However, I actually think it is quite elegant in places (sorry). So why did it get slated? Well, primarily because no matter what you are working with, you always have the same object – RepObject. So if I am working with a particular scanned page / image, I have a RepObject. If I am working with a document, it’s a RepObject; if a separator, a RepObject; a batch, a RepObject…. So you can see it can get confusing…
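
To make that concrete, here is a minimal sketch (C#, late-bound COM via dynamic) of the sort of type-checking Capture forces on you. Only RepObject and its node type property come from the post / API itself – the numeric values below are placeholders, so check them against your own Capture type library:

    static class CaptureSketch
    {
        // Placeholder node-type values – look these up in your Capture type library.
        const int NodeTypeBatch     = 1;
        const int NodeTypeDocument  = 2;
        const int NodeTypeImage     = 3;
        const int NodeTypeSeparator = 4;

        // Whatever you are holding – batch, document, image or separator –
        // the compile-time type is always the same: a RepObject.
        static string Describe(dynamic repObject)
        {
            switch ((int)repObject.NodeType)
            {
                case NodeTypeBatch:     return "Batch";
                case NodeTypeDocument:  return "Document";
                case NodeTypeImage:     return "Image";
                case NodeTypeSeparator: return "Separator";
                default:                return "Unknown RepObject";
            }
        }
    }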

It is also worth remembering that many of the features of Capture are ActiveX COM components (OCX controls). These are used to wrap up a bunch of functionality – typically the actual scan process, capture path configuration, document processing options etc.

Capture out of the box

Now the Capture environment out of the box is ok; not great, ok. It can get confusing when trying to use it in a real production environment – I will explain why in a moment. The key thing to remember here is to ensure batches are the only objects you can see floating around from the root of the Capture environment. If you have images, or documents, then you are asking for trouble. In addition, separate all your capture paths into another folder (if you choose to use these – I recommend you don’t to be honest – well, not in the way Capture encourages you to).

Always remember that Capture out of the box is a good tool to monitor what is going on with your software if you are using the API to create your own FileNET capture applications. It does help, if only for logic checks.

The object model

In my early days working with Capture, it was hard to logically separate out functionality and implementations of classes etc. It was even harder to then put this in a way other developers could pick up quickly and easily. Because of this I decided to “wrap” up the Capture object model so that it made more sense to others in the company, and to logically separate out functionality and instances of particular types of RepObjects (there is a nodeType property that helps identify the type of object you are working with, e.g. Batch, Document). I strongly urge people to do this; it helps no end and makes developing your own Capture applications a lot easier. If you don’t have time to do this – or the in-house skills – perhaps look at purchasing a “toolkit” that an old FileNET VAR may have written. My old toolkit is probably still in circulation, but it is written in COM. If anyone wants it, I can put you in touch with the company that owns the IPR to it (an old employer).

By wrapping up the Capture object model into your own, it makes life a lot easier, especially for things like identifying types of objects, as your own object model should have objects such as “Batch”, “Document”, “Image”, “Server” etc. These objects can then logically contain relevant information and functions. A good example is status. Unfortunately you cannot unlock batches while they are being processed (unless you are an admin user). This means you need to check the status of a batch to see if it can be unlocked. Within your own object model this is easy and only needs to be written and wrapped once (you see why life can get easier with your own object model). This makes life a lot easier in a real-world environment where your capture environment is a workflow in itself.
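
As a sketch of what I mean – and only a sketch, since the Status and Unlock member names below are assumptions to be mapped onto the real Capture calls in your own toolkit – a wrapped Batch might look like this:

    using System;

    public class Batch
    {
        private readonly dynamic _repObject;   // the underlying Capture RepObject

        public Batch(dynamic repObject) { _repObject = repObject; }

        // The status check is written and wrapped once, here,
        // instead of being repeated throughout your applications.
        public bool CanUnlock
        {
            // 0 = "not being processed" is a placeholder value
            get { return (int)_repObject.Status == 0; }
        }

        public void Unlock()
        {
            if (!CanUnlock)
                throw new InvalidOperationException("Batch is still being processed.");
            _repObject.Unlock();   // assumed name for the underlying Capture call
        }
    }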

Separate out the capture environment

Many people still use capture paths; I suggest you minimise their use as much as possible. These are fiddly and troublesome to say the least. First things first: scanning and document recognition, assembly etc. should not be done on the same machine (though Capture suggests they should). Separate out the actual pure scan function from document processing activities – allow the scan station to only scan, nothing more. Remember scan stations are expensive, and the big benefit of expensive scanners is throughput. You cannot afford to have the machine’s processing power wasted on other tasks…

Document processing activities (such as splitting images into documents and batches, image enhancement etc) should all happen off of the scan station. So ensure you get a background service or application in place on a dedicated machine that does this job. This process will be critical to the success of your implementation – so test, test, test, test and carry out some more testing.

Indexing is a critical part of capture. If you are slow here, you will have a negative impact on system performance. In addition, if you are sloppy and data is not correct, you will have a negative impact on the whole retrieval system and its capability to meet business requirements. Things to remember are that you may be working with different classes of documents. You may also need to pull in validation from external systems, so indexing applications can prove tricky. On top of this, you may well be releasing images into a workflow system – so data that is not going to be stored as index properties may also need to be captured…. If you have your own object model, all of this becomes a hell of a lot easier….

A good tip – ensure your scanner operators always put only the same classification of documents in a batch. Sounds obvious, but far too often this is overlooked. It is hard to change a document’s class once it has been scanned, trust me….

Extend the object model

The Capture object model does allow for attributes to be placed on objects. This means you can extend your own object model with properties and store these as attributes on a RepObject. I have seen others decide to implement their own database to do this; however, that is just a massive overhead – and why bother, when you have all that you need in Capture? In addition, when testing it is so easy to look at RepObject attributes in Capture itself.
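
For illustration, here is how a wrapped property might persist itself as a RepObject attribute. The attribute accessor names are placeholders for however your version of the Capture API exposes attributes – the pattern, not the member names, is the point:

    public class Document
    {
        private readonly dynamic _repObject;   // the underlying Capture RepObject

        public Document(dynamic repObject) { _repObject = repObject; }

        // Data captured for a downstream workflow system, not stored as an
        // index property in the retrieval system – persisted as an attribute.
        public string WorkflowQueue
        {
            get { return (string)_repObject.GetAttributeValue("WorkflowQueue"); }  // assumed call
            set { _repObject.SetAttributeValue("WorkflowQueue", value); }          // assumed call
        }
    }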

For particular requirements, extending the object model is a great way of attaching data that won’t be stored in the retrieval system, but may be required for other purposes (either to help index agents, to trigger workflow systems, or to integrate with other LOB systems).

Another key area in which to extend the object model is that of locking. Basically, when an item is being worked on it is locked by Capture. However, you need to take control of this, as again it can get messy – with batches getting left in a locked state etc. In your object model I strongly suggest you explicitly lock an object when you need to, and explicitly unlock it when you are finished with it. Also, if you have a good “status” set up, this makes life easier when checking whether you can or cannot work on an object. At the indexing stage and document processing stage, this is crucial…
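
A minimal sketch of that discipline, building on the Batch wrapper above (assuming it gains a Lock method; Lock and Unlock are again assumed names for the underlying Capture calls):

    public static class BatchProcessor
    {
        public static void ProcessBatch(Batch batch)
        {
            if (!batch.CanUnlock)    // status check from our own object model
                return;              // someone else is working on it – skip it

            batch.Lock();            // explicit lock, assumed wrapper method
            try
            {
                // ... document assembly, image enhancement, indexing etc ...
            }
            finally
            {
                batch.Unlock();      // always runs, so nothing is left locked
            }
        }
    }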

Success in a nutshell…

Wrap up the Capture API, extend the object model with your own properties that utilise attributes, add your own functions to your logical components, and explicitly take control of things such as locking. Once you have this type of API in place, splitting out scanning from document processing and image enhancement is easy. It is also a lot easier to then implement good indexing applications (or one that can do everything) that promote quick working and integrate with validation components and other LOB systems. Releasing the captured images into the actual repository can also be separated, freeing up processing on the index station or from QA (if you have this in place).

If you do all of this, your Capture environment will be very successful and flexible enough to meet all your needs. If you want to plug in third-party features at a later date, you can (such as ICR or something similar). You can do this elegantly too, by storing the data from the third-party component as further attributes on your object (probably a document). You can then pick these up at your indexing station or anywhere in the capture path and use them accordingly….

If you want help with Capture feel free to contact me directly. I still provide consultancy for this environment and am always happy to help…





Business application UI design

19 02 2010

Now this is my first post on this topic (I am sure many more will follow), and I want to talk about some fundamentals of user interfaces within business applications. More importantly, I want to distinguish between UI that looks great in a demonstration, and UI that is great to use… Trust me, they are not the same thing…

Traditional business UI and poor design – it’s not good

When presenting your business application, the first thing it is judged on is the way it looks. It’s a simple fact: if it looks awful then people just aren’t going to love your application, even if it’s far and away the best thing out there in terms of functionality. It’s also worth remembering that a poor UI will slow down users actually using the system, which is never good – it leads to lost time, poor efficiency, poor views of the system and ultimately frustration.

Typically, business application UIs are functional, and nothing more. All the fields the user needs to access are shown (hopefully in logical places) and the screen is often that lovely grey colour. There is nothing “flash” about traditional business application UI; however, being functional is no excuse for not delivering a great end user experience. This is something that is becoming increasingly important with business applications – and one of the reasons is probably the widespread use of websites that look great…

UI that presents well, but is it great to use?

With WPF and “richer” UI environments (we can include Silverlight in here), UI design for business applications can really add value to the user experience. Screens can become more user friendly, intuitive to use and give the user greater feedback and guidance. However, you can take a good thing too far – and this is something that I have started to see quite a bit of…

Just because as designers and developers we have the tools to create something that really has “wow” factor, should we use them? There is a time and a place – and it is key here to remember what the actual use of the system is, how often users will be using the system (or just certain screens), how experienced users are and how quickly they need to complete their tasks. Let’s look at a really basic example:

Let’s look at searching for a customer’s record. Now I have seen some great and out-of-the-box ways of doing this. Some include browsing and dragging cabinets and records around to allow us to navigate our way to the record / search areas of a system. Others use carousels that act as a “wizard”, with each selection bringing a new set of carousel options (identifying customer type, then account type etc. before providing a search screen)… These demonstrate brilliantly; they will knock the socks off the directors and you are a winner…. You have put together something different, something that’s intuitive, and something that looks great. Well done…

However, let’s now look at this in the real world… I have a user who wants to search for a customer quickly; they may even have the customer on the end of the phone. So is dragging objects around to build a search, or to navigate to a search, a great idea? Or will they find this restrictive, slow and ultimately frustrating?

Just because we have the tools to build “wow” UI and animations – we shouldn’t feel we have to use them and businesses shouldn’t expect them to be included either…

UI that demonstrates well, and is great to use…

This is where we need to get to: UI that is operationally great – it allows users to work quickly and efficiently, with the bells and whistles that are available in WPF, Silverlight etc. used to aid the user experience and bring real value to screens.

Let’s look at our customer record search. We could have a UI that contains a shortcut key to a search panel, which could be slid onto our screen from within any other screen / module. The user can enter some quick key information and be presented with search results in a new tab… The user has, in a couple of key strokes, called up a search and found the customer. They can now get on with servicing the customer’s request and then go back to whatever task they were working on before. Now this doesn’t look half as flash as our earlier UI; however, it looks good and the use of the “bells and whistles” has added to the system functionality-wise – as well as to the user experience.
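
In WPF terms, that shortcut is only a few lines. This is a minimal sketch, assuming a hypothetical searchPanel element that a real implementation would animate into view:

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;

    public class MainWindow : Window
    {
        // Hypothetical panel that slides in over whichever screen is active.
        private readonly StackPanel searchPanel =
            new StackPanel { Visibility = Visibility.Collapsed };

        public MainWindow()
        {
            // Ctrl+F calls up the customer search from anywhere in the application.
            var showSearch = new RoutedCommand("ShowSearch", typeof(MainWindow));
            InputBindings.Add(new KeyBinding(showSearch, Key.F, ModifierKeys.Control));
            CommandBindings.Add(new CommandBinding(showSearch, (s, e) =>
            {
                searchPanel.Visibility = Visibility.Visible; // a real UI would animate this
                // ...then focus the first field so the user can type immediately
            }));
        }
    }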


Conclusion…

We have to remember how the user works when designing screens – something that can be lost when going through requirements and thinking about what could look / demonstrate great. If your user needs to work quickly, or spends a lot of time in certain screens, then you still can’t beat shortcut keys and use of a keyboard. Touch can help, but ultimately navigating through great graphical interfaces can be slow and frustrating… Flashy new developer and design tools are there to help, and should be used when needed, not for the sake of using them…

For business applications the design rules should still be, “what works quickly for the user” and then, “how can we make that look and feel better”…





iPhone still leading the way?

12 02 2010

A lot of friends of mine quickly jumped on board the iPhone when it was first released, only for all of them to quickly jump ship complaining of the lack of features, slow internet connection etc. Don’t get me wrong, they all said they loved the look, the feel, and how the iPhone works; it was just that the phone did a lot less than other phones… It was truly look and feel over substance (something I get to witness in IT a lot)…

However, even with the early iPhone there were the usual die-hard Apple fans that claimed it revolutionised the mobile world, claimed this and that – but they did have a point. The iPhone set the new standard for how a phone should look and feel and how we should interact with it… Nothing else….

So is the iPhone still the market leader here?

Function, function, function…

We have got used to phones doing more and more, which can now be said of the iPhone too. Originally it gave us less, but now the iPhone has a proper internet connection, can allow you to actually forward a text message, and has a half-decent camera… So function-wise it’s now up there… Or is it… If you’re a business user you more than likely have been advised not to go near an iPhone. Why is that? Well, like other Apple devices, think connectivity and synchronisation. Other phones are a lot better at it, especially from a business point of view. For starters, I would suggest pretty much any BlackBerry phone is going to suit your needs more. In addition, though not overly popular with bloggers and phone reviewers, Windows Mobile 6.5 phones are also more in line with business user needs and there is still a wealth of them out there…

Look, feel and touch?

Ok, we all want to go touch, don’t we? Well, maybe not all; for some of us it is quite impractical. I know many business users that will choose a qwerty keypad over a qwerty touchscreen, and that’s because with touch it is easier to hit the wrong key (I’m sure backspace is a very popular key – well, it is with me on my own touch phone). But touch phones do have a lot going for them: the size of the screen, the ability to interact with various media in a friendly fashion, and also a nice big screen to allow us to use mobile applications (think Google Maps, Live etc).

So is the iPhone still ahead in this area… Well, until recently I would have said yes for sure. The iPhone touch screen is impressive, and the interface for writing messages is also pretty good. My own Samsung Omnia (which I have had a long time now) is ok, but the touch on the iPhone just feels and works better. However, the HTC HD2 has that same feel about its touch screen. It also boasts a big 4.3” display; all in all, I think I prefer it…..

Applications and extras…

Ok, here the iPhone is still miles ahead; nothing can really touch it. The iPhone App Store is a pretty impressive place; however, again, if you are a business user there isn’t that much there for you. Friends of mine who own the new iPhone 3GS have lots of apps on it; however, I don’t think one is of any use other than amusement – don’t get me wrong, there is nothing wrong with that, I am just not a big games user on my phone… Also, I know there are some good iPhone-based applications that are of use; look at the NatWest banking app for example…

So why is the iPhone ahead here? Well, it’s because of the open SDK that allows developers to make iPhone-based applications. That is how the iPhone got ahead here; until recently, it wasn’t that easy to dev mobile apps. However, other phones have caught up – again, let’s look at the HTC HD2. Windows Mobile 6.5 can support applications; however, many of us just don’t go and get them. A popular app though is Facebook… One of the big things is using Facebook, Twitter and YouTube on your phone. On many phones this still isn’t a great experience, but with Windows Mobile 6.5, Android and the touch phones that use these systems (let’s keep on track with the HTC HD2) the experience is just as good as the iPhone’s (if not a little better)…

So will things change… Well, I can’t see Windows-based phones, or Android-based phones, catching the iPhone any time soon with regards to applications. I believe in terms of functions and features they are probably ahead now (well, a few of the phones are). But things may well change with Windows Mobile 7 – which could be out at the end of this year… Please note, could be out… If Windows Mobile 7 supports Silverlight 4, then a very large community of developers instantly gets access to delivering mobile phone applications. This means that, potentially, Windows-based mobile phones will almost overnight be able to provide an app store that can compete with the iPhone’s… Now that would be interesting…





Should ROI hold much weight?

9 02 2010

When working as a consultant I often get asked about ROI, and how best to calculate it. Now when working for previous companies, this had to be done (especially when in a pre-sales phase) and I can see clearly why. Like anything in business, if you can show something is worth doing from a money point of view, then it is likely to get done.

But how much weight do ROI calculations actually hold? I have read a couple of blog posts about this in the past couple of days, some of which see ROI as a complete joke and pull no punches in saying so. Have a quick read of this article by Alan Pelz-Sharpe (@CMSWatch on twitter): http://www.cmswatch.com/Trends/1798-ROI-Joke

Working as an analyst myself, and also as a technical guy, I can see Alan’s point: ROI calculations are very vague and often based on presumptions. But does this mean they are no good at all? I beg to differ.

Business Case

Ok, I am not saying use ROI in the traditional sense. I see ROI as an illustrative tool when looking at smaller parts of a business case. So when investing in IT, make sure you draw up a good and detailed business case for the solution you are purchasing (let’s use ECM here as this is more of a specialist field of my own). Building a business case is not easy, especially for ECM, and don’t think that a couple of hours online reading up on ECM is going to help you write a good one. This is an area that I often help businesses with, and it is one where businesses should really look for outside help if they can.

Ok, so you have a good case for your business: one which looks at the business benefits of the system and technology available, one that looks at what is the best fit for your organisation (don’t get hung up on price at this stage).

Where to use ROI type calculations

I never use ROI calculations as an argument for anything, especially for something as large as a complete solution for an organisation. (Many of the problems with these are illustrated in Alan’s post – typically use-case presumptions, over-optimistic calculations etc.) So where do I use these types of ROI illustrations… Well, I use them based on “cases”.

So let’s look at what I mean by a case. Ok, let’s say you have all your content stored on paper, and unfortunately your storage area goes up in a nasty fire. What is the estimated cost to your business of this? Don’t try to actually put a value on it; instead, imagine that if you had a good ECM system, you know this simply wouldn’t be a problem for you. So in this case, your real ROI would be whatever you would have saved in this scenario, which is more than measurable money…. Don’t like that one?

Ok, what about the cost in fines to your organisation if you are found to be “non-compliant” with government legislation – when with a particular solution you would be compliant? Is the fine greater than the investment? Yes – so the system therefore shows a good ROI in this case scenario.

You can keep on doing this, looking at smaller business scenarios within your business case for a particular IT solution (it doesn’t have to be ECM) and carrying out ROI-type illustrations. You can of course be tempted to actually place monetary values in your ROI scenario illustrations, but please, if you do this, be very cautious and make sure you get your “variables” as accurate as possible. Let’s look at a quick example…

An example is monitoring how long it takes an individual to locate a paper file. Now obviously this is going to be different each time that person searches for a file – sometimes it will be on their desk, or the desk adjacent to them; it may be filed correctly, or it may be mis-filed or, worse still, missing. So, take an average of that person’s time spent looking for files over a couple of days. Then spend the same period of time monitoring a person using an ECM system to locate files. What’s the time saving? I would use time as my ROI in this case and let others put a price on it. Why? Well, though you can put a price on this time saving, does it actually equate to that money saving? You still pay that person the same wage, do you not? So the only way to calculate a real saving is to look at efficiency gains in this case, and that can be tricky. Though in theory it is easy, what actually happens when that staff member has more time to do their work? Do they actually work harder and faster? Or do they only marginally increase the amount of work they complete – actually, do they get given any more work because they are working more efficiently, or do they still only receive the same amount of work to do… You can quickly see how ROI as a money calculation can come back to bite you later…
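
To make the arithmetic concrete, here is a minimal sketch of that calculation – the numbers are invented purely for illustration:

    using System;

    class TimeSavingIllustration
    {
        static void Main()
        {
            // Invented figures for illustration only – measure your own.
            double paperAvgMins  = 4.0;  // average time to find a paper file
            double ecmAvgMins    = 0.5;  // average time to find it in the ECM system
            int    lookupsPerDay = 30;   // retrievals per person per day

            double savedMinsPerDay = (paperAvgMins - ecmAvgMins) * lookupsPerDay;
            Console.WriteLine("Time saved per person per day: {0} minutes", savedMinsPerDay);
            // Deliberately stop at time – converting time into money is
            // exactly where ROI calculations start to come unstuck.
        }
    }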

Conclusion…

Basing any investment, especially in IT, on an ROI calculation is asking for trouble. Instead, look at a valid business case that may contain scenarios which illustrate areas where investment return can be measured, and not necessarily in hard cash. Within my blog you will find a number of posts on ECM savings, some of which do look at scenarios and cases where monetary values could be added; however, my aim is to make illustrations and put forward a business case, rather than simply an ROI calculation…





Redefine the way we use the web, to unlock its potential…Web 3.0?

6 02 2010

This is something I have been thinking about for a number of years now, but more so recently with all the talk of HTML 5. Basically, we haven’t really changed the way we use the internet (from a technical point of view) since the web became mainstream, shall we say. Sure, we now use it in new ways which we hadn’t dreamed of (habits and the way we communicate with each other), but essentially the web still works the same way it always has: content rendered as HTML that is displayed back to us in a web browser. Even if HTML 5 is the magic version and delivers so much more in terms of animation and streaming, has it actually changed the way in which we use the web, or the way the web works for us? No…

Let’s not go back to the good old Mainframe environment…

It seems more and more IT professionals and large organisations see the web as the new mainframe, especially when you start talking “thin client” and “cloud computing” (the cloud could be seen as our mainframe… scary). When you start looking at mainframe environments and then at cloud and thin client computing, you see that the basic concepts are very similar. So what do I mean? Well, all of the processing happens on a server; the machine you actually use to access it doesn’t really have to do anything. In a mainframe environment we have dumb terminals; in the new way of thinking (trying not to laugh, sorry) we have a PC that runs a browser (this could be a very low spec machine), and if all we did was “cloud compute” we perhaps wouldn’t need anything else?

Sure, I see benefits, some of which are green, but the negatives are so obvious to see. These are essentially the same problems we had with mainframes – the same problems that led us to using the “PC” and the “Network” to replace mainframes.

Some thin client issues?

Let me give you an example. Imagine you and I are working as designers, creating 3D computer models of pretty much anything. We may even be responsible for animating these 3D models (think something like Toy Story, I don’t know why, it just popped into my head). Ok, now imagine you are part of a team of say 20 working on these models; of course you are designing Buzz, someone else Woody etc. Let’s think just how much “processing” power we need for this – just you and your requirements? The answer: quite a bit, well, a lot. Now imagine having to multiply that by 20. Oh, and now let’s have that processing carried out in a “thin cloud computing environment” (of course your application is written with the fab HTML 5, which means we can do anything), which at the end of the day needs a hell of a lot of work going on at the server, oh, and traffic across our network… Do you see the problems?

Well, basically, even with the advances in our hardware, the server will be doing too much and things won’t go well. The system will be going slow, maybe crashing; you as a designer will be going mad with frustration, along with the rest of your team, oh, and not to mention you are working to a deadline so the project manager is now going mad too. Let’s throw into the mix that our team is distributed across the States and the UK, and some of us love using Internet Explorer, some Firefox, some even Chrome… Hmm, though in theory the web is great here, it is no match for a good old client desktop and some distributed servers…

Now I know I am focusing here on a situation that doesn’t lend itself to “cloud computing” or “thin clients”, but if we believe all the hype of HTML 5 and cloud computing, why shouldn’t we be thinking this is possible? But as our hardware advances so does our software (though at a slower rate, granted), and we as users (be us general public or business users) expect more and more performance and capabilities. So while some of our user requirements do seem to lean us towards a cloud computing way of working, soon our requirements will no doubt swing back the other way (and won’t we be repeating the mainframe and PC story all over again?)

There is an answer

The answer is pretty simple to be honest, and it is something Flash showed us the way to a number of years ago when it first started popping up on the web. The answer is a mixture of the two.

So let’s start evolving how we use the web properly – not just our habits, but how it is used. The web becomes a communications network and in some ways returns to its roots. We can still use it in the way we are used to, as in we find websites and we view them in a web browser; however, those websites that aren’t just presenting us with some information or basic shopping facilities – websites that are more “applications” – get themselves installed on the client machine. So think MS Office on the web. Why install on the client? So that the user experience is not restricted by the web architecture, nor the browser, and so that “processing loads” are removed from the server and distributed back down to the client PC.

Isn’t that what Flash was doing, installed and running on the client, err, years ago? Yes, and that’s why Flash has worked so well up to now… The problems with Flash are not what it visually looks like, nor its basic architecture (running on the client); the problem is that it doesn’t lend itself to delivering “applications”. So it is great for the web to show animations, funky banners, slick movies etc, but don’t think it will be great at delivering that 3D modelling tool we spoke about earlier…

So let’s go back to our 3D modelling requirement in the designer’s studio. In our new web world we are now working with a RIA that actually runs on the client machine, uses local storage on the machine and uses the web only for bare communications and maybe storage of files that are to be shared. All of a sudden, all of the issues with “thin client” and “cloud computing” and server loads are removed, yet essentially we are still using the web, and “cloud computing” to an extent…

So the answer is RIAs that use the client processing power and that do not run in the web browser.

Is this available…

Yes, it is. Since Microsoft launched its Silverlight platform (which many see only as a competitor to Flash) it has been working towards this type of scenario, where we can maximise the benefits of the PC and the benefits of the web and cloud computing. Silverlight 3 was the first version to deliver an out-of-the-browser experience, and this has been taken further with Silverlight 4, with it being able to run as a trusted application on the client machine. Oh, and it also runs on Macs and PCs, and if in the browser, any browser…

Silverlight, though in some ways similar to Flash and even the old Java applets, is a new way of using the internet, rather than us re-inventing the same way of using the web with more bells and whistles. Like Flash and Java applets, Silverlight essentially runs on the client PC, which means we can utilise its processing power to do our work; it doesn’t need to go back to the server for updates to the UI, business rules or anything like that – it can all be done there on the client machine. However, it is connected and delivered essentially through the web as a communications network, so its data and files can be easily pulled and pushed across the web and stored there. Updates to the software are also delivered through the web, with the user being able to get the latest version of the software just by using the software itself.
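
As a minimal sketch of what this looks like in practice, using the out-of-browser APIs Silverlight 3 introduced (the startup handler name here is just the usual convention):

    using System.Windows;

    public partial class App : Application
    {
        // Wired up as the application's Startup handler.
        private void Application_Startup(object sender, StartupEventArgs e)
        {
            if (Application.Current.IsRunningOutOfBrowser)
            {
                // Installed on the client machine: full local processing,
                // with the web used for data, files and updates.
            }
            else if (Application.Current.InstallState == InstallState.NotInstalled)
            {
                // Still in the browser – we could offer a desktop install.
                // Application.Current.Install() must be called from a
                // user-initiated action, such as a button click.
            }
        }
    }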

At present this is all still young, but the potential is there to change our web experiences and what we realistically should be using the web for. MS Office could be delivered as nothing but a Silverlight OOB (out of browser) application, allowing us to purchase it online and be using it within moments. And it would look and feel just like the version we currently get from a CD (not the slightly less functional web version). Business applications could be delivered through organisations’ intranets, or their “cloud providers”. Websites that provide “secure” trade or partner areas would essentially have these installed on the client machine. Twitter, Facebook and other types of highly interactive websites would be delivered as RIAs installed on the machine (there is a prototype for Facebook already built, which you can download and use at http://www.silverlight.net/content/samples/apps/facebookclient/sfcquickinstall.aspx). And you haven’t lost the flexibility of the web at all: if you were on a new machine and wanted to get to Facebook, you would still visit the website, where you would get prompted to install the client – a simple and quick install… and away you go, back on Facebook.

The future then is…

Re-defining the web as a communications network and moving RIAs out of the web browser and down onto the client. By using the web in this fashion we get a truly distributed environment that has the benefits of the web, but also the benefits of the client machine…