workFile Vision. A change in direction

12 11 2010

Today’s post is very much centred on Business Process Management (BPM), Enterprise Content Management (ECM), Customer Relationship Management (CRM)…

Some of you may keep an eye on the news from my company, One Degree Consulting. If you do, you will know that the workFile ECM & BPM side of the business (the platform) will be going through a transition phase in the coming weeks and months. We have effectively torn up our existing road map for version 2.0 of the workFile Vision product and put together a new one. The new road map brings some big, well, massive changes to how we see the future of IT in business, the future of business solutions, the future of SMEs' access to solutions and, consequently, the Vision solution itself…

In the coming weeks, workFile and One Degree will publish more information on the changes, the effect they will have on the Vision suite, and how they will benefit business.

In this post, though, I want to give a quick outline of what some of these changes in thinking are, what is changing in the Vision product, and what drivers led to this drastic new thinking…

Single Silo…That singular degree of separation

workFile is, if you didn't know, an ECM and BPM platform. However, it also provides records management and, with that, CRM capability to an extent. Other business-focused modules are built on top of the records management capabilities. All of these, though, are very separate modules and silos, each aware only of the small fragments of data shared between them, which effectively link the content and make it more useful to an end user…

So what's the big idea? Well, the big change is to move away from a multiple-silo approach and bring these different elements closely together, effectively delivering a single-silo solution for ECM, BPM, CRM, records management, and dynamic content processing and capture. The CRM module will be a thing of the past, replaced by a dedicated customer-focused section of workFile, built neither on top of the records management functionality nor as a separate module.

In essence, ECM, BPM, CRM etc. will become modules of the past, superseded by a new way of looking at how we work as individuals, as teams and as an organisation, and at how your organisation communicates and engages with its customers… All of these elements seen as one…

So how do we achieve this with the new version of workFile Vision?

Through state awareness, user empowerment and adaptation. The concept here is to ensure true state awareness between the user, the customer, the content and the process. By process, I don't mean a rigid path that work must follow, but rather a process guide that is highly adaptive to the needs of the content, the customer and the user.

In addition, the single UI and underlying capabilities of workFile, which allow real team working on items of work, make it a lot easier for the agent to collaborate and process their work. This may not sound like anything new, but it supports newer ways of working. We have a vision that people will work more as teams on individual pieces of work, pulling together on them not just in a loosely collaborative fashion but in a real sense of working together. This is a big move away from BPM and case management as they stand today, where the assumption that we work as individuals and move work along sits at the centre of work and process thinking.

Max J Pucher has a great article on the future of work, in which he talks of users "swarming" to do work. In it he also states that by 2015, 40% or more of an organisation's work will be non-routine, up from around 25% today. Take the time to read his blog, it is very informative: http://isismjpucher.wordpress.com/2010/11/12/the-future-of-work/

More than a single silo…

A single silo that supports content, customers, additional records and process information is the best approach. In addition, interconnectivity and multiple feeds of data mean that not only will users need greater perceptive skills, but their software will need to deliver all this information in a way that is easy to identify and work with.

workFile, though, provides real flexibility in terms of content, status and structured data. This gives teams the flexibility to create new structured data records on the fly and, in essence, join them directly to their work (which could be content based, customer based, etc.). This may all sound complex, but essentially it is quite simple… It's how we would naturally work without the rigidity of structured processing (BPM).

Distribution…

Though we are moving to a single silo, this doesn't mean a centralised solution. On the contrary, we believe that departmental distribution is key to freedom and success. So workFile will support a greater level of distributed processing, with departments able to create their own content guides, process guides, rules and so on. But this doesn't mean we are allowing duplication: commonality between departments will be identified and illustrated and, wherever applicable (and suitable), shared between them.

It’s a team approach

Working in "swarms" sounds quite fun, but in essence it means tightly knit teams working together quickly and efficiently. Traditional BPM presumes we work on pieces of work as individuals and then move them along to the next person. Sure, occasionally we allow "branches" in the processing, or the splitting of items of work, but it doesn't support multiple people working on the same piece of work at the same time. So, with this in mind, Vision 2.0 will support a more team-based approach to working and will ditch the rigidity of its traditional BPM platform, which was used to define how users work.

Social Media

While social media is taking off, organisations tend to see it either as some wonderful marketing tool or as something they need to get control of. However, social activities, social media sites, conversations and the like are increasingly becoming part of a team's working day. These conversations and interactions aren't carried out at a set time, they aren't structured in their content and, on their own, they don't form strong ties between you as an organisation and your customers. In addition, they are often disjointed, with an organisation unable to tie social media engagement with a customer back to, for example, that customer's record.

So the trick is to ensure interactions can be processed by the right people, that the right people provide good information, and that social media is seen as a form of engagement and conversation, not just free marketing. In addition, the content generated from these interactions demands a flexible way of working; after all, the customer may send requests that don't follow a strict pattern, and the user must be able to handle those requests flexibly. This content should also be recorded and brought into the solution, so that other team members have all the information they need to help…

workFile will become a lot more social, interacting with typical social media websites and allowing users the freedom to interact in the way they would expect.

Flexible, adaptive and yet accountable

Organisations and management want to have full control; however, if they do, things become too rigid, too centralised and ultimately inflexible. So the solution is to trust our workers, to empower them and to let them do their jobs. Sure, we need to ensure quality, service level agreements and so on, but this can be done through guidelines and empowered users. Accountability will always still be there, with the solution recording all interactions and use. But the point is that the user has the power to process the work as they wish (to an extent, obviously; certain rules have to be in place for compliance).

The big winners of Vision 2.0

So who is workFile Vision aimed at? Well, the big winners at first will be SMEs, simply because workFile is used mainly by organisations that fall into the SME category (with the odd exception). The new version will drive down the cost of IT, and of these types of solutions, for SMEs…

However, larger organisations can easily benefit from this new way of thinking and working too. While SMEs will see benefits from a smaller investment, larger organisations will not only share in this benefit but also see dramatic increases in productivity and efficiency, all with a reduction in administration and licensing costs… See, we didn't call it Vision for nothing.

Finally, a change in name…

Finally, the workFile ECM & BPM platform name will be no more. Though Vision remains the product suite, the terms ECM and BPM will both be dropped from the workFile name. Why? Simply because workFile will offer a lot more, and it deserves a new description of what it delivers… The marketing people can think of something, I am sure…





Can we help business users engage more with ECM?

23 04 2010

I have posted a number of times about the benefits of ECM solutions and the positive impact they can have on any business, be it small or global. However, ECM is still a hard sell, and many organisations, even once they have a good system in place, don't really get the end user engagement required to make ECM work well for them.

So why is this? Why is it so hard to ensure real user engagement when implementing an ECM solution? What are the problems?

The easy part

When we talk about ECM, and even demonstrate it, the first and easiest thing to show is the retrieval of content. This is always easy for business and end users to grasp: "So you're looking for a particular file? Well, do this, this and this and hey, there you are, there is the file you want to work with…" This is great and, in essence, is the heart of ECM. However, retrieval is always the easy part. The problem is ensuring that the content we are looking for is actually in the repository…

Habits

Content that should be in a repository is everywhere; it could be a business contract document you are drafting, an email, and so on. Now, for the user working with this content, ask yourself: what do they do with it? I think most of the time you will find that, if it's a file, it is more than likely sitting in old reliable "My Documents" (or perhaps a "My Documents" on a server in some cases), its name more than likely meaningful only to that user, and that name the only thing distinguishing the content. So what of the email content in this example? Well, if you are a small business using POP3 mail, it lives only on that user's machine. If you have a mail server (such as Exchange), then it's sitting on that mail server.

So, when using my ECM solution, I can't actually find the content I require, because it simply isn't in the repository. This means that no matter how good your ECM system is, it is pointless if it isn't holding the content you require…

Increasing scope and engagement

The only way to get all content into your ECM repository is to make "capture" processes easy. I am not going to talk about scanning physical paper here (see other posts I have made on this), but about capture of content that is already in digital format. This has to be as simple as possible and include easy access from a multitude of other applications.

By making this easy and, more importantly, almost part of users' current working habits, any ECM platform will perform better and give more back to an organisation, simply because it will hold more of the relevant content. This is the key to a good ECM platform, and to getting all those efficiency and productivity gains ECM promises to deliver.
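
To make this concrete, here is a minimal sketch of one low-friction capture route: a watched folder that users already save into, polled in the background and pushed into the repository with automatically derived metadata. The folder paths and the store_in_repository call are assumptions for illustration, not part of any particular ECM product.

```python
# A minimal sketch of a "watched folder" capture route, assuming a hypothetical
# store_in_repository() call; a real ECM platform would expose its own API here.
import time
from pathlib import Path

WATCHED = Path(r"C:\CaptureDropZone")    # a folder users already save into
PROCESSED = WATCHED / "captured"

def store_in_repository(path: Path, metadata: dict) -> None:
    """Placeholder for the platform-specific 'add document' call."""
    print(f"Storing {path.name} with {metadata}")

def poll_once() -> None:
    PROCESSED.mkdir(parents=True, exist_ok=True)
    for item in WATCHED.iterdir():
        if item.is_file():
            # Derive minimal metadata automatically so the user's saving habit
            # does not have to change at all.
            metadata = {
                "original_name": item.name,
                "captured_at": time.strftime("%Y-%m-%d %H:%M:%S"),
                "source": "watched-folder",
            }
            store_in_repository(item, metadata)
            item.rename(PROCESSED / item.name)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(30)    # low-frequency polling is enough for ad-hoc capture
```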

Becoming adaptive

I have spoken a little about being flexible and adaptive, more so with regard to BPM, but the same arguments are valid here for ECM. Typically, capture processes and the way users are expected to work with ECM are very rigid. They need to become more fluid and adaptive to users' needs and requirements. How many times do we see a user wanting to engage and add content to a repository, only to find it hard to assign properties and values to a piece of content because it doesn't fit within the designed, rigid system parameters? Let's become flexible and allow the user to update these parameters so that the content can be stored correctly and accurately. This is to the benefit of everyone involved.

In addition, as an organisation, you need to ensure you choose an ECM platform that can adapt to your requirements. A key part of ECM is application integration, and it is no good using a platform that you cannot integrate easily with other business applications, or, more to the point, with business applications you are yet to purchase…

 

Quick conclusion…

If ECM can fit into end users' habits almost seamlessly, then user engagement is going to be far easier and far greater. If we take this further and provide ECM solutions that are more adaptive, more flexible and more readily available to users, then ECM will become the cornerstone of any business, as it should be… It is thinking like this that made me push for our own ECM platform, and it is why my company is working hard to get the new workFile ECM Vision platform ready. ECM has so much potential; the key is unlocking it for users, which ultimately benefits the business…





FileNET Panagon Capture…How to…

25 02 2010

Ahhh, the inspiration behind today's post is that I have noticed people finding my blog while looking for the good old FileNET Panagon Capture objects – such as RepServer and RepObject – and how to unlock these components…

Now, it has been a little while since I was programming in Panagon Capture, but this is the environment I first cut my teeth on after leaving uni. (Panagon Capture is a document capture environment for the FileNET Image Services and document management repositories.) Panagon Capture has seen me working all over the UK, Ireland and parts of Europe implementing capture solutions for FileNET installations. Straight out of uni it meant being dropped in the deep end, but I have to say I enjoyed it – and it was how I made a name for myself at my first place of work…

Things to remember with the Capture object model

OK, first things first: the Capture object model got slated in its early days; it was too confusing to pick up and many people struggled with it. However, I actually think it is quite elegant in places (sorry). So why did it get slated? Well, primarily because no matter what you are working with, you always have the same object – RepObject. So if I am working with a particular scanned page or image, I have a RepObject. If I am working with a document, it's a RepObject; a separator, a RepObject; a batch, a RepObject… So you can see it can get confusing…

In addition, it is worth remembering that many of the features of Capture are ActiveX COM components (OCX controls). These wrap up chunks of functionality – typically the actual scan process, capture path configuration, document processing options and so on.

Capture out of the box

Now, the Capture environment out of the box is OK – not great, just OK. It can get confusing when trying to use it in a real production environment; I will explain why in a moment. The key thing to remember here is to ensure batches are the only objects floating around at the root of the Capture environment. If you have loose images or documents there, you are asking for trouble. In addition, separate all your capture paths into another folder (if you choose to use them at all – to be honest, I recommend you don't, at least not in the way Capture encourages you to).

Always remember that Capture out of the box is a good tool for monitoring what is going on with your software if you are using the API to create your own FileNET capture applications. It does help, if only for logic checks.

The object model

In my early days working with Capture, it was hard to logically separate out functionality and implementations of classes, and it was even harder to present this in a way other developers could pick up quickly and easily. Because of this, I decided to "wrap" the Capture object model so that it made more logical sense to others in the company, and in addition to separate out functionality and instances of particular types of RepObjects (there is a nodeType property that helps identify the type of object you are working with, e.g. Batch or Document). I strongly urge people to do this; it helps no end and makes developing your own Capture applications a lot easier. If you don't have the time to do this, or the in-house skills, perhaps look at purchasing a "toolkit" that an old FileNET VAR may have written. My old toolkit is probably still in circulation, but it is written in COM. If anyone wants it, I can put you in touch with the company that owns the IPR to it (an old employer).

Wrapping the Capture object model in your own makes life a lot easier, especially for things like identifying types of objects, as your own object model can have classes such as "Batch", "Document", "Image" and "Server". These objects can then logically contain the relevant information and functions. A good example is status: unfortunately you cannot unlock batches while they are being processed (unless you are an admin user), so you need to check the status of a batch to see whether it can be unlocked. Within your own object model this is easy, and it only needs to be written and wrapped once (you can see why life gets easier with your own object model). This matters in a real-world environment, where your capture environment is a workflow in itself.
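
As an illustration of that wrapping idea, here is a minimal sketch in Python of a small object model layered over the raw RepObject handles. The nodeType property is mentioned above; the Attributes accessor, the status values and the method names are assumptions for illustration, not the documented Panagon Capture API.

```python
# A minimal sketch of wrapping the raw Capture objects in a small object model.
# nodeType is referenced in the post; Attributes(), the status values and the
# class split below are illustrative assumptions, not the documented API.

class CaptureItem:
    """Base wrapper around an anonymous RepObject handle."""
    def __init__(self, rep_object):
        self._rep = rep_object

    @property
    def node_type(self) -> str:
        return self._rep.nodeType          # assumption: nodeType names the kind of object

class Document(CaptureItem):
    pass

class Batch(CaptureItem):
    @property
    def status(self) -> str:
        return self._rep.Attributes("Status")   # assumption: status kept as an attribute

    def can_unlock(self) -> bool:
        # Batches cannot be unlocked while they are still being processed.
        return self.status not in ("Scanning", "Assembling", "Releasing")

def wrap(rep_object) -> CaptureItem:
    """Turn an anonymous RepObject into something developers can reason about."""
    mapping = {"Batch": Batch, "Document": Document}
    return mapping.get(rep_object.nodeType, CaptureItem)(rep_object)
```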

Separate out the capture environment

Many people still use capture paths; I suggest you minimise their use as much as possible, as they are fiddly and troublesome to say the least. First things first: scanning and document recognition, assembly and so on should not be done on the same machine (though Capture suggests they should). Separate the pure scan function from document processing activities – allow the scan station to scan, nothing more. Remember, scan stations are expensive and the big benefit of expensive scanners is throughput. You cannot afford to have the machine's processing power wasted on other tasks…

Document processing activities (such as splitting images into documents and batches, image enhancement, etc.) should all happen off the scan station. So ensure you get a background service or application in place on a dedicated machine that does this job. This process will be critical to the success of your implementation – so test, test, test, and then carry out some more testing.

Indexing is a critical part of capture. If you are slow here, you have a real negative impact on system performance; if you are sloppy and the data is not correct, you have a negative impact on the whole retrieval system and its ability to meet business requirements. Remember that you may be working with different classes of documents, and you may need to pull in validation from external systems, so indexing applications can prove tricky. On top of this, you may well be releasing images into a workflow system, so data that is not going to be stored as index properties may also need to be captured… If you have your own object model, all of this becomes a hell of a lot easier…
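
As a sketch of what that might look like, the snippet below keeps index properties apart from workflow-only data and runs a validation check against an external system. The field names and the validation rule are assumptions for illustration only.

```python
# A minimal sketch of an indexing step that validates a key field against an
# external system and keeps workflow-only data separate from index properties.
from dataclasses import dataclass, field

@dataclass
class IndexResult:
    index_fields: dict = field(default_factory=dict)   # stored on the document in the repository
    workflow_data: dict = field(default_factory=dict)  # released to the workflow system only

def validate_policy_number(policy_number: str) -> bool:
    """Placeholder for a lookup against the line-of-business system."""
    return policy_number.isdigit() and len(policy_number) == 8

def index_document(doc_class: str, captured: dict) -> IndexResult:
    policy = captured.get("policy_number", "")
    if not validate_policy_number(policy):
        raise ValueError(f"Policy number {policy!r} failed validation")
    return IndexResult(
        index_fields={"DocumentClass": doc_class, "PolicyNumber": policy},
        # Data the workflow needs but the retrieval system does not store.
        workflow_data={"priority": captured.get("priority", "normal")},
    )
```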

A good tip: ensure your scanning operators always put only one classification of document in a batch. It sounds obvious, but far too often this is overlooked. It is hard to change a document's class once it has been scanned, trust me…

Extend the object model

The Capture object model does allow attributes to be placed on objects. This means you can extend your own object model with properties and store them as attributes on a RepObject. I have seen others decide to implement their own database for this, but that is just a massive overhead – and why bother, when you have all you need in Capture? In addition, when testing it is very easy to look at RepObject attributes in Capture itself.

For particular requirements, extending the object model is a great way of attaching data that won't be stored in the retrieval system but may be required for other purposes (to help index agents, to trigger workflow systems, or for integration with other LOB applications).
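
A minimal sketch of that idea, assuming an Attributes-style accessor on the underlying object (the exact call name is an assumption, not the documented API):

```python
# A minimal sketch of stashing non-index data as attributes on the object itself,
# rather than in a separate database. The Attributes(name) getter and
# Attributes(name, value) setter forms are assumptions for illustration.

def set_extra_data(rep_object, values: dict) -> None:
    """Attach workflow / LOB data to the object without touching index properties."""
    for name, value in values.items():
        rep_object.Attributes(name, value)

def get_extra_data(rep_object, names: list) -> dict:
    return {name: rep_object.Attributes(name) for name in names}

# Example: tag a document with data a downstream workflow trigger will need.
# set_extra_data(doc, {"LOBReference": "CLM-0042", "NeedsICR": "Y"})
```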

Another key area in which to extend the object model is locking. Basically, when an item is being worked on it is locked by Capture. However, you need to take control of this, as again it can get messy, with batches getting left in a locked state and so on. In your object model I strongly suggest you explicitly lock an object when you need to, and explicitly unlock it when you are finished with it. Also, if you have a good "status" set up, it makes life easier when checking whether you can or cannot work on an object. At the indexing and document processing stages, this is crucial…
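
Here is a minimal sketch of taking explicit control of locking, assuming Lock/Unlock calls and a Status attribute on the wrapped item; all of these names are illustrative assumptions rather than the official Capture API.

```python
# A minimal sketch of explicit lock control, assuming Lock()/Unlock() calls and a
# Status attribute on the wrapped item; all names here are illustrative assumptions.
from contextlib import contextmanager

WORKABLE_STATUSES = ("ReadyToIndex", "ReadyToRelease", "Failed")

@contextmanager
def locked(item):
    if item.Attributes("Status") not in WORKABLE_STATUSES:
        raise RuntimeError("Item is still being processed and cannot be locked")
    item.Lock()                  # take the lock explicitly, only when needed
    try:
        yield item
    finally:
        item.Unlock()            # always release it, even if processing fails

# Usage at the indexing stage:
# with locked(batch) as b:
#     index(b)
```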

Success in a nutshell…

Wrap up the Capture API, extend the object model with your own properties that utilise attributes, add your own functions to your logical components, and explicitly take control of things such as locking. Once you have this type of API in place, splitting scanning from document processing and from image enhancement is easy. It is also a lot easier to implement good indexing applications (or one application that can do everything) that promote quick working and integrate with validation components and other LOB systems. Releasing the captured images into the actual repository can also be separated out, freeing up processing on the index station or the QA station (if you have one in place).

If you do all of this, your Capture environment will be very successful and flexible enough to meet all your needs. If at a later date you want to plug in third-party features (such as ICR or something similar), you can. You can do this elegantly too, by storing the data from the third-party component as further attributes on your object (probably a document). You can then pick these up at your indexing station, or anywhere in the capture path, and use them accordingly…

If you want help with Capture feel free to contact me directly. I still provide consultancy for this environment and am always happy to help…





Centralise Document Capture

11 12 2009

For quite some time I have been a strong advocate of larger organisations taking control of, and responsibility for, their own scanning processes. I have nothing against outsourced scanning organisations; it's just that organisations are entrusting what could be their most sensitive data to a third party and, not only that, they are relying on that third party to deliver it back as good, accurate images, more often than not along with key associated data.

I now hear cries of "what's wrong with that?" Well, a number of things, actually…

  1. Just who are the people carrying out the scanning? Who has access to these files?
  2. What skills do they have in identifying key parts of a document?
  3. Compliance issues / complications
  4. Quality control
  5. Speed

Let’s look at these one at a time.

So who is actually doing the scanning and indexing tasks? Well, in-house you have control over this; basically, you choose whom to employ. When outsourced, however, you have no idea who has access to these files. Sometimes you don't even know what information could be found in the files (if they are sent directly to an outsourced document capture organisation), let alone what sensitive information is being read, and by whom.

Let's be honest, being a document scanner is not the most thrilling of jobs, so outsourcing companies will often employ "lower skilled" staff (please don't take that the wrong way) and staff working on a project-by-project or very temporary basis. This brings me on to point 2…

What skills do your outsourcing company's staff bring? Have they any experience of scanning or indexing and, if so, do they understand your business and what content to expect or look for in the documents being scanned?

Compliance is a big thing here, and even I sometimes get a little lost with it in regard to outsourcing. For many markets, compliance means you have to know where all your data and content is stored at any point in time. Now, if you are using an outsourcing company, does this mean you need to know which machines that content is being stored on, and where those machines are? With cloud computing this is already a big problem, as organisations simply don't know exactly which server is holding which piece of their information… so does the same apply when outsourcing your document capture? Worth taking some time to think about that one…

Quality control is a big bugbear of mine. Remember the old IT saying, "shi* in equals shi* out"? It's very true of document capture. If your image quality is poor, or the accuracy of its accompanying data is lacking, you will find it rather hard to locate that content, and your great document retrieval / ECM system will be almost pointless…

Ahhh, speed. This, along with cost, is often the big factor for organisations choosing to outsource document capture, but is it any quicker? In my experience the answer is no. I have worked on numerous projects that used outsourcing companies for their document capture, only to find it took an unexpectedly long time to get the images into the retrieval system (based, for example, on the date the content was received or posted).

So get centralised

It's cost effective for larger organisations to set up their own centralised scanning environment. Not only will the business process of capturing this content be smoother, but the quality of your images and accompanying data will also be better. With greater investment in scanning software and the automation of data capture (OCR/ICR, forms recognition, auto-indexing, etc.), organisations will find it easier than ever to reap the rewards and enjoy a quick ROI.

There is already a trend back towards centralised scanning; a recent AIIM Industry Watch article highlights this. Have a read here: http://www.aiim.org/research/document-scanning-and-capture.aspx, then ensure you take ownership of your own document capture requirements…

For a good place to start when thinking about document capture and scanning solutions, read one of my earlier posts on document capture success…

https://andrewonedegree.wordpress.com/2009/05/14/successful-document-capture/





Document and file retrieval metadata

28 08 2009

Far too much focus today is placed on providing complex retrieval fields within ECM solutions, and far too much is made of them by customers. For sure, inherited values and properties can be of great use, but when you start to look at your actual requirements, far too often retrieval fields are simply made too complex.

Points to remember

When designing your retrieval fields, metadata or indexes (whatever you wish to call them), keep in mind just what a user will want / need to do to actually locate this file / document. Here is a quick list to help you:

  1. How much information will the user have on a file?
  2. How much time do you want to allow them to spend entering search information?
  3. How can your metadata fields actually assist in this?
  4. What sort of results will be brought back, and how clear will these be to the user (clear as in how quickly they can see the file they want)?

Many recent systems spend a lot of effort on identifying files very accurately; however, in doing so they also make the data capture stage (scanning and indexing) very complex and require the user to spend longer setting up their search.

Keep it simple

When designing or identifying metadata fields for files, always try to keep things as simple as possible.

First things first: identify the types of files you are storing. This doesn't mean PDF, Word, TIFF etc.; rather, it relates to their type within your business. Some examples might include personnel files, expense claim forms, insurance claim forms, phone bills, customer details and so on (depending on your business).

Once you have made this identification, we get to the question of retention: how long will a particular file type stay "live", then move to an "archive", then be completely deleted? When doing this you may find some logical separation of files appearing. NB: only create a new classification of file type if it is needed. Don't do it purely as a logical separation; classifications should only be created to separate groups of metadata or to address issues such as migration and retention periods.
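
A minimal sketch of recording these decisions in one place follows; the file types, classifications and retention periods are illustrative examples only, not recommendations.

```python
# A minimal sketch of recording file types, classifications and retention rules
# in one place; names and periods are illustrative examples.
RETENTION_RULES = {
    "expense_claim":   {"classification": "Finance",   "live_years": 1, "archive_years": 6},
    "insurance_claim": {"classification": "Claims",    "live_years": 3, "archive_years": 7},
    "phone_bill":      {"classification": "Utilities", "live_years": 1, "archive_years": 2},
}

def retention_for(file_type: str) -> dict:
    try:
        return RETENTION_RULES[file_type]
    except KeyError:
        raise ValueError(f"No retention rule defined for file type {file_type!r}")
```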

The tricky part is now to identify the metadata fields associated with your types of files. I would always suggest you keep these as simple as possible and try not to use more than seven fields to identify a file. This is where designers often get carried away using inherited fields from different objects within the repository. That is all well and good, and it can really help when displaying search results (or a hierarchy of files) back to users. However, what I try to do is the following:

  1. Imagine you don't know whether there are other files out there in the system (nothing to inherit from)
  2. Identify at least one key field (policy number, customer number, telephone number, etc.)
  3. Provide a list of options for the type of file it is (birth certificate, driving licence, claim form, phone contract, interview, recorded conversation, etc.)
  4. Only provide other fields that help logically distinguish this file from other files of the same type, or that help identify, for example, a customer entity within your business
  5. Provide as many "drop down list" options as possible. This ensures data is accurate and not reliant on spelling or interpretation
  6. Identify any metadata that may be "shared" with other file types. For example, a policy number may be found on multiple types of file across multiple classifications. In addition, a policy number is unique within the business, so it can be used to tie a number of files to a particular policy holder.

If you stick to these six principles, you will find that nine times out of ten you will have no call for complex inheritance or complex storage concepts. You will more than likely also have identified your classifications in full. Please note that your file types, along with their classification, will also, nine times out of ten, provide you with enough criteria to assign security information to these files accurately.
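
To make the principles concrete, here is a minimal sketch of a simple metadata schema along these lines: one key field, a constrained document-type list, a handful of supporting fields and a shared field reused across classifications. All field names and options are illustrative examples only.

```python
# A minimal sketch of a simple metadata schema: one key field, a constrained
# document-type list, a few supporting fields and a shared field reused across
# classifications. All names and options are illustrative examples only.
DOCUMENT_TYPES = ["Birth certificate", "Driving licence", "Claim form", "Phone contract"]

CLAIMS_FILE_SCHEMA = {
    "key_field": "PolicyNumber",                    # unique across the business
    "fields": {
        "PolicyNumber":    {"type": "string", "required": True},
        "DocumentType":    {"type": "choice", "required": True, "options": DOCUMENT_TYPES},
        "CustomerSurname": {"type": "string", "required": False},
        "ReceivedDate":    {"type": "date",   "required": True},
    },
    "shared_fields": ["PolicyNumber"],              # also used by other classifications
}

def validate(record: dict, schema: dict = CLAIMS_FILE_SCHEMA) -> list:
    """Return a list of problems; an empty list means the record is acceptable."""
    problems = []
    for name, spec in schema["fields"].items():
        value = record.get(name)
        if spec.get("required") and not value:
            problems.append(f"{name} is required")
        if spec["type"] == "choice" and value and value not in spec["options"]:
            problems.append(f"{name} must be one of the listed options")
    return problems
```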

Once you have identified how information is to be retrieved, think about what information could be automatically captured at the data capture side of things. This sometimes illustrates fields that could be used to help identify files at retrieval; it also sometimes identifies fields that really aren’t appropriate.

Showing results

Often your retrieval system will display search results in a format that isn't well suited to you or your business needs. This is why so many "professional services" are offered to customers of such systems. Linking objects together, or even showing them in a "tree view" fashion, can help the end user. However, this isn't a call for inherited properties; rather, it is a call to display business-related information logically.

Also remember that different types of searches can require different ways of displaying search results. This is sometimes overlooked by designers and system providers, to the detriment of the user experience.

Finally, always think past the retrieval process. Once a user has found the file they want, they will need to interact with it in some way; this could be simply viewing its content, or passing it on to another user, etc.

Conclusion

I am a firm believer in keeping things as simple as possible, and I often adopt that old IT maxim, the "80-20" rule. Far too often IT tries to deliver too much, and in doing so it overcomplicates areas of the system or, worryingly, the business. When this happens, more often than not the project is seen as a failure, when really, by delivering less, the customer gets more.

When putting together metadata for the retrieval of files, remember to keep things as simple as possible. Identify key fields and don't get carried away capturing too much retrieval data. Also, always keep your end users in mind – that's the end user at the scanning and indexing stage as well as the end users searching for files. Sticking to these simple rules will ensure you deliver a file retrieval system that works efficiently, quickly and well for your end users and your business…





Successful document capture…

14 05 2009

Well, this is something close to my heart. My first ever project after leaving university was to help write a document capture application built on top of the FileNET Panagon Capture platform. Ahh, happy days… Though I did seem to earn the name "scan man" from then on, which wasn't so great, as I then had to be involved with every document capture project our company took on…

OK, so how do you implement a successful document scanning / capture solution? Well, it's very simple: follow these five guidelines and you are well on the way.

  1. Throughput is everything. Make sure people can load the scanner and let it do its thing. You don't want to be stopping to separate documents or batches, so make sure your software can handle this, and purchase a scanner with a big document holder.
  2. Ensure you maximise the quality of the images you are capturing. If this could be a problem, make sure you put good quality control and re-scan technology in place.
  3. Identify as much information as possible up front with your software. The more a user has to do, the slower and more expensive the process will become.
  4. Ensure the data captured or assigned to a document is accurate. Remember, retrieval of these images depends on the accuracy of your data capture.
  5. Your document capture is pointless unless you release the images into your storage repository with all the correct information. Again, make sure this is done seamlessly and accurately. The longer files sit in your capture process, the longer it will take for them to turn up in a customer file, for example…

 

So where to start?

Well, start with your document capture software; there are lots of solutions out there. Firstly, when choosing your capture software, keep those five guidelines in mind. You want to automate as much as possible (unless we are talking about only the odd scanned document through the day). In addition, you don't just want to watch a sales pitch on the actual scanning process or the physical scanner being used. You want, and need, to see the process all the way through, and with a variety of documents.

It's best if you can use forms wherever possible, but you will always have unstructured documents coming to you, such as letters. You MUST see a demonstration of how these are dealt with, then ask yourself:

“is that efficient?”

“how could that be speeded up?”

“am I happy with the way data is entered / captured?”

“now let’s find the document in the retrieval system”

I don't want to start recommending software, as depending on your storage repository you may find you have a limited selection. What I will say is that for our workFile ECM repository we use software I have been familiar with, and more than happy with, for some time: Kofax. I have worked on numerous projects with Kofax Ascent Capture and with Neurascript recognition modules (now part of Kofax). Kofax provides all the technology and features you could want to streamline any capture environment. More importantly, it allows you to write your own release processes into the repositories of your choice.
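
For illustration, here is a minimal sketch of what a custom release step might look like, independent of any particular capture product; the repository.add_document call is a placeholder, not a real Kofax or workFile API.

```python
# A minimal sketch of a custom release step: take the finished document (image
# files plus index data) and push it into the target repository.
# repository.add_document() is a placeholder, not a real product API.
from pathlib import Path

def release_document(image_paths: list, index_data: dict, repository) -> str:
    # Refuse to release anything missing its key retrieval data; poorly released
    # documents are what make the retrieval system "pointless" later on.
    if not index_data.get("DocumentType"):
        raise ValueError("Cannot release without a document type")
    doc_id = repository.add_document(
        files=[Path(p) for p in image_paths],
        properties=index_data,
    )
    return doc_id
```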

What about architecture

Scanning can be quite intensive for your PC. A while back, all of your "steps", if you like, were carried out on a single machine: you scanned, had the batches and documents recognised, processed and enhanced, then sent them on for an agent to index. However, this isn't great; ideally you want to split out this intensive processing work and let your scan station simply scan images.

Server-based solutions are best, freeing up staff to scan and pull documents as and when they are ready. Your images should always be ready quicker than your staff can quality-assess them or carry out indexing tasks. Oh, and don't be fooled by "thin" document capture: something has to drive the scanner, so it's not a true "thin client"…

What about staff?

This can be a boring task, so rotate your staff between different jobs every couple of hours. They may still get bored, but if you don't do this, they will make lots of errors and get really bored. Trust me, just spend a couple of hours doing one task such as scanning and your brain goes numb…

You will also need a "champion" of the capture process: someone who can keep people motivated and ensure they maximise the potential of the system. All too often the system's capacity is not met because staff become lazy or complacent. This undermines your investment and diminishes your return, so a champion is very important.

It’s also worth noting that from time to time, you will need someone with more experience of the scanning process, again that champion, simply because you will get issues with stuck paper, batches not getting recognised, image quality problems etc. At this point, you need someone with a little more knowledge of how things work.

 

Finally

Remember, no matter how good your capture process is, your retrieval system is only as good as the quality of the images and the data associated with those images. Also, please don't invest heavily in a great capture system and then scrimp on your retrieval system; if you do, you will see no benefit from the capture process and document imaging at all. Your first port of call is still ensuring you purchase the right retrieval / document management system; then address the capture side of things.