SAGE Line 100 ODBC driver (hmmm)

16 12 2009

OK, I am not a SAGE professional, nor am I a SAGE developer; however, I would like to be able to access accurate documentation online and actually speak to others to find out whether something is correct…

The requirement

The requirement is simple: connect to a SAGE Line 100 system via ODBC (DSN), run a simple query on the data (stock information – read-only, of course), and then do something with that information (in my case, call a secure web service and update it with the stock information). Simple, half a day's task, you would think…

It is a tad more complicated than that, as the requirement is for this to run as a Windows service, scheduled to run a number of times during the day. This in itself isn't a problem; however, checking up on error messages can be an issue, as you are reliant on interacting with the event log (again, nothing wrong with this, but you will see why it added to my problems).

The SAGE ODBC problems…

Ahh, here we go. Well, first off, the SAGE ODBC drivers aren't great (well, not for SAGE Line 100). This caused no end of issues with just trying to connect to the database. There isn't that much help online and, to top it off, many discussion groups post questions but get no real answers. My first problem was that the System DSN that had been set up didn't appear to work; using .NET 3.5 to connect to it just wasn't working. My event log was reporting a number of issues which, when following the call stack, highlighted an "unsupported driver request". With this in mind, I then adopted a DSN-less connection approach, using a connection string.

My next problem isn't really to do with SAGE that much, more a lack of communication. You see, the SAGE Line 100 connection string isn't your typical connection string; rather, it needs pointers to particular files. I had unfortunately been given some slightly misleading information on the location of these files (though this may be more of a breakdown in communication than anything else).
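For the record, a DSN-less string has to carry everything a DSN would otherwise store. The real keywords for the SAGE Line 100 driver are driver-specific (as above, it wants pointers to particular files), so the following is only the generic shape of a DSN-less ODBC connection string, with hypothetical keys and paths – not the actual SAGE keywords:

```
Driver={Hypothetical Sage Line 100 ODBC Driver};DataPath=C:\Sage\Data;UID=stockreader;PWD=********
```

Only `Driver={...}`, `UID` and `PWD` are standard ODBC keywords here; check the registry entries behind a working DSN (or your driver documentation) for the keys your install actually expects.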

At last I got connected. However, my next problem was that I couldn't set the SQL command, nor, more importantly, execute the SQL. Again, the SAGE driver was very, very particular about its SQL and format, to the point that my SQL went through numerous iterations when, to be frank, it was fine in the first place.

The big problem…ODBC read / fill

Everything was now finally moving along, or so I thought, but then more problems. This time an unhandled error was being raised in my code, with the event log only giving me the following message: "Event log messages can only be 32766 characters long". Now that's quite an error message that wasn't getting displayed. I am very particular about my error handling in code, so I was surprised to see that I had an untrapped error. Anyway, I spent many hours adding event log debug code after event log debug code, and added error messages, only to find none of them were getting called…
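One defensive habit worth adopting here: never hand the event log a message longer than it will accept. The sketch below is a language-agnostic illustration in Python (the names `safe_event_message` and `write_event` are my own stand-ins, not any real API), using the 32,766-character limit quoted in the error above:

```python
# Guard against the Windows event log's message-length limit by truncating
# before writing, so an oversized message can never turn into a logging error.

MAX_EVENT_LOG_CHARS = 32766  # limit quoted by the event log error message

def safe_event_message(message: str, limit: int = MAX_EVENT_LOG_CHARS) -> str:
    """Return a message guaranteed to fit within the event log limit."""
    if len(message) <= limit:
        return message
    marker = " …[truncated]"
    return message[: limit - len(marker)] + marker

def write_event(message: str) -> None:
    # Stand-in for EventLog.WriteEntry (or whatever logging backend you use).
    print(safe_event_message(message))
```

Had something like this wrapped around my debug messages, the real error text might have made it into the log in the first place.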

This meant I started to post information messages to the event log stating where I was in the code (not great, as this service will run forever and a day, dealing with thousands of items a number of times during a day). As I followed my event log I could see that my code was executing and behaving as expected almost all of the way through, then out of the blue: an unhandled error. Referring back to that location in my code, it all seemed handled correctly. I looked at this so hard that I even started to think maybe a timing issue was causing the problem…

However, no… The cause is simple. All of my code works fine, but the SAGE Line 100 drivers don't. Why, you ask? Well, simple: I had read out all of the records, placed them in my DataView and returned this to the calling function, all fine, but then there it is, my unhandled error… It's great really: in my finally statement, all my ODBC objects are disposed and set to nothing, alas without any additional error handling around them. It appears that the ODBC reader, once disconnected and disposed (.Dispose), triggers an error from SAGE for every item that it has read, hence my event log being unable to display my message (maybe if it could, I would have solved this earlier, but that's not my point). This then caused a further error, which meant my lovely Windows service stopped working…
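The failure mode above is easy to reproduce in miniature. This Python sketch uses a dummy "reader" that (like the SAGE driver) raises during disposal; all the names are illustrative, not the real ODBC API. Note how an unguarded finally clobbers a perfectly successful query, while wrapping the dispose in its own handler keeps the service alive:

```python
# Demonstrates why cleanup code in a finally block needs its own error
# handling when the driver can raise during dispose.

class FlakyReader:
    """Stand-in for an ODBC reader whose dispose/close raises."""
    def read_all(self):
        return ["item-1", "item-2", "item-3"]
    def close(self):
        raise RuntimeError("driver error raised during dispose")

def fetch_stock_unguarded():
    reader = FlakyReader()
    try:
        return reader.read_all()     # the query itself succeeds
    finally:
        reader.close()               # raises, discarding the successful return

def fetch_stock_guarded():
    reader = FlakyReader()
    try:
        return reader.read_all()
    finally:
        try:
            reader.close()
        except Exception:
            pass  # log and swallow: a failed dispose must not kill the service
```

Calling `fetch_stock_unguarded()` raises even though every record was read; `fetch_stock_guarded()` returns the rows. The same shape applies to a .NET try/finally around `.Dispose()`.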

Something to remember

If you are using the SAGE Line 100 ODBC driver with .NET, basically you need to trap every call and handle everything. This includes simply disposing of objects and setting them to nothing. The driver is ODBC 2 compliant, so only use ODBC 2 compliant calls (this may mean some of the ODBC features you want to use from .NET are unavailable). Stick to these, however, and things should work fine(ish). You can use the standard OdbcConnection.Open and OdbcDataReader.Read methods, but remember to trap all your code, especially when destroying connections and objects…





Centralise Document Capture

11 12 2009

For quite some time I have been a strong advocate of larger organisations taking control, and responsibility, for their own scanning processes. I have nothing against outsourced scanning organisations; it's just that organisations are entrusting what could be their most sensitive data to a third party, and not only that, they are relying on that third party to deliver it back as good, accurate images, more often than not along with key associated data.

I now hear cries of "what's wrong with that?" Well, a number of things actually…

  1. Just who are the people carrying out the scanning? Who has access to these files?
  2. What skills do they have in identifying key parts of a document?
  3. Compliance issues / complications
  4. Quality control
  5. Speed

Let’s look at these one at a time.

So who is actually doing the scanning and indexing tasks? Well, in-house you have control over this: basically, you choose whom to employ. However, when outsourced, you have no idea who has access to these files; sometimes you don't even know what information could be found in them (if sent directly to an outsourced document capture organisation), let alone what sensitive information is being read, and by whom.

Let's be honest, being a document scanner is not the most thrilling of jobs, so outsourcing companies will often employ "lower skilled staff" (please don't take that the wrong way) and staff working on a project-by-project or very temporary basis. This brings me on to point 2…

What skills do your outsourcing company's staff deliver? Have they any experience of scanning or indexing, and if so, do they understand your business and what content to expect and look for when scanning documents?

Compliance is a big thing here, and even I sometimes get a little lost with it in regard to outsourcing. For many markets, compliance means you have to know where all your data and content is stored at any point. Now, if you are using an outsourcing company, does this mean you need to know what machines that content is being stored on? Where those machines are? With regard to cloud computing this is a big problem, as organisations simply don't know exactly what server is holding which of their information… so does the same apply when outsourcing your document capture? Worth taking some time to think about that one…

Quality control is a big bugbear of mine. In IT circles, remember "shi* in equals shi* out", and that's so true with document capture. If your image quality is poor, or its accompanying data inaccurate, then you will find it rather hard to locate that content, and your great document retrieval / ECM system will be almost pointless…

Ahhh, speed. This is often, along with cost, the big factor for organisations choosing to outsource document capture, but is it any quicker? In my experience the answer is no. I have worked on numerous projects that have used outsourcing companies for their document capture, only to find it has taken an unexpectedly long time to get the images into the retrieval system (based on the date the data was received / the postal date of the content, for example).

So get centralised

It's cost-effective for larger organisations to run their own centralised scanning environment. Not only will the business process of capturing this content be smoother, but the quality of your images and accompanying data will also be better. With greater investment in scanning software and the automation of data capture (OCR / ICR, forms recognition, auto-indexing etc.), organisations will find it easier than ever before to reap the rewards and enjoy a quick ROI.

There is already a trend back towards centralised scanning; a recent AIIM Industry Watch article highlights this. Have a read here: http://www.aiim.org/research/document-scanning-and-capture.aspx, then ensure you take ownership of your own document capture requirements…

For a good place to start when thinking about document capture and scanning solutions, read one of my earlier posts on document capture success…

https://andrewonedegree.wordpress.com/2009/05/14/successful-document-capture/





Bing Maps getting Silverlight

4 12 2009

Well, it had to be just a matter of time before Bing Maps started using Silverlight to deliver the richest mapping experience on the web. Since the start of November I have been playing around with the Silverlight Bing Maps control, which far outperforms the AJAX control and, for me, the HTML version on the actual Bing website.

The payoff

Microsoft's Bing

Well, for the end user the Silverlight experience is far smoother and allows you greater control. For example, zooming into an area on the map using the wheel of your mouse is a nice touch, and the app renders smoothly. In addition, you can mesh together traditional map views with aerial photos. There are also nice features such as Streetside – which currently isn't available in that many areas, but which allowed me to jump into the map and then walk around parts of the world I have never seen (well, visited now…).

With the Silverlight version, everything just feels so much more professional; it's a real jump forward in terms of the functionality that is possible and the experience it provides the end user. Why not see for yourself? You will need to have Silverlight installed: http://www.bing.com/maps/explore/ NB If you don't have Silverlight installed, I suggest you get it asap.

Microsoft has also followed the trend of offering "app stores". A new application gallery will be available, allowing developers to include their own information on a map.

The biggest payoff, though, is the capability this provides to other developers and websites that want to use mapping technology. I have already seen a number of demonstrations showing how you can overlay / highlight "areas" within a map. One great demonstration shows the New York marathon route: it not only shows you the route and the "area" covered on the Bing map, but also shows runners moving along it, comparing their relative times etc. Not bad…

The .NET framework…

I have to say that I like the way Microsoft is going, building everything on the .NET framework or a subset of it. It allows more powerful applications to be built and integrated with each other. This is another great example: Silverlight, which is a subset of .NET with a WPF subset as its presentation layer, combined with the Microsoft Live web services (again delivered in .NET), delivering a feature-rich experience for users. More importantly, though, working in this way provides the development community with the tools they require to take things further.

By combining the Silverlight Bing Maps control with the Microsoft Live web services, it is now a quick and rather simple(ish) task for any .NET developer to deliver powerful mapping / map-based services to clients that look and perform great.