I don’t use Oracle, but this is a well-written post on performance. Bon Appétit!
Every summer, I like to have a learning goal. Hopefully I gain more knowledge, and, besides, I can write one of those corny posts about what I did over the summer. Two years ago, that goal was learning Clojure, and I’m still learning it, but around that time, I also wanted to explore so-called NoSQL databases, MongoDB in particular.
Well, this summer, despite a large project looming, I would like to take up learning MongoDB. I accelerated my Clojure learning by finding a project where Clojure could safely be introduced and, if a bailout was needed, could be re-implemented in a language like Python or Perl. I have to find a way to introduce MongoDB the same way.
I have been using SQL databases for years, and I have a reasonable understanding of how and where a SQL database would be introduced. When switching to a NoSQL database, where is the starting point? I have lots of data, but how would that data be brought in?
So, then, the journey begins with how. Only time will tell.
Fatal error at line 300 in module “module name here (actual 4gl file name here)”: -1829: Cannot open file citoxmsg.pam. This error is most often caused by incorrect environment settings. Basically, I was running a program in one directory with the environment set for another.
Error -1829 (Cannot open file citoxmsg.pam) is most often caused by incorrect environment settings. I suggest checking the environment variables set for the application, especially INFORMIXDIR, DBLANG, and PATH. You can also check the INFORMIXDIR variable in setnet32.
As it turns out, I don’t believe this is an environment problem. It appeared after I replaced an external program that inserted into a table with an in-program solution.
In the end, this error occurs because, while reading a table with a cursor, the last record is continually re-read without ever hitting a “notfound” (end of records) condition. So it is still a strange error, but I know what causes it.
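The 4GL itself isn’t shown here, but the shape of the bug can be sketched in Python. This is only an illustration: sqlite3 stands in for Informix SE, and the table and column names are made up. The point is that a fetch loop has to test the end-of-rows condition on every pass; skip that test and the last row gets processed forever.

```python
import sqlite3

# Hypothetical table standing in for the real billing table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bills (acct INTEGER)")
conn.executemany("INSERT INTO bills VALUES (?)", [(1,), (2,), (3,)])

cur = conn.execute("SELECT acct FROM bills ORDER BY acct")

accts = []
while True:
    row = cur.fetchone()
    if row is None:        # the Python analogue of 4GL's NOTFOUND status;
        break              # without this check, the last row repeats forever
    accts.append(row[0])

print(accts)  # [1, 2, 3]
```

In 4GL the equivalent is checking the status after each FETCH; the loop above just makes the same idea explicit.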
Earlier this year, all our tax billing programs had to be updated to print a new kind of bar code. The US Post Office notified us, with plenty of lead time, that the older Postnet bar code would no longer be accepted as of May 2011. Instead, a new “intelligent” bar code would be used. That bar code contains many more characters, some constant, like our town’s ID, and some varying, like a unique billing serial number not related to the bill’s account number.
Our billing software uses the Informix SE database, which supports transactions. I thought it wouldn’t hurt to add one read and one write to a completely different table, still within the same transaction that protects reads and writes to the tables storing bills and balances. I was mistaken.
A billing run that normally takes less than ten minutes took about eighteen times longer.
The fix was to invent a new scheme: the starting serial number and the number of bills to be printed are stored up front, so that possibly overlapping bill runs are guaranteed unique serial numbers on every bill. This is the Post Office’s requirement for the serial-number portion of the intelligent bar code.
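A minimal sketch of that reservation scheme, again in Python with sqlite3 standing in for Informix, and with a made-up control table (`serial_ctl`): each bill run claims a contiguous block of serial numbers in one short transaction, so overlapping runs can never hand out the same number.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical control table: a single row holds the next unused serial number.
conn.execute("CREATE TABLE serial_ctl (next_serial INTEGER)")
conn.execute("INSERT INTO serial_ctl VALUES (1000)")

def reserve_serials(conn, bill_count):
    """Claim a contiguous block of serial numbers for one bill run.

    One short transaction reserves the whole range up front, so two
    overlapping runs can never print the same serial number.
    """
    with conn:  # commits on success, rolls back on error
        (start,) = conn.execute("SELECT next_serial FROM serial_ctl").fetchone()
        conn.execute("UPDATE serial_ctl SET next_serial = ?",
                     (start + bill_count,))
    return range(start, start + bill_count)

run_a = reserve_serials(conn, 500)  # serials 1000-1499
run_b = reserve_serials(conn, 300)  # serials 1500-1799
```

The key design point is that the transaction only touches the tiny control table, not the bill and balance tables, so the expensive lock contention from the first attempt never comes into play.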
All the serial numbers are recorded at the end of the bill run, using database tools to load the table instead of SQL. The run time returned to within two minutes of the original.
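The deferred-recording idea can be sketched the same way (sqlite3 and the `bill_serials` table are stand-ins; the real system uses an Informix load utility rather than SQL): collect the account/serial pairs in memory during the run, then write them all in one batch afterwards, outside the hot transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bill_serials (acct INTEGER, serial INTEGER)")

# During the bill run, just collect (account, serial) pairs in memory ...
recorded = [(acct, 1000 + i) for i, acct in enumerate([11, 22, 33])]

# ... then load them in one batch after the run is finished, much as a
# bulk-load utility would, instead of one SQL insert per bill.
with conn:
    conn.executemany("INSERT INTO bill_serials VALUES (?, ?)", recorded)

count = conn.execute("SELECT COUNT(*) FROM bill_serials").fetchone()[0]
print(count)  # 3
```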
So, no matter what you think is going to happen with a database, something else can and will happen instead. It is good to test and gather performance metrics.