The Hume Integration Java Datahub



This is Licensed and Supported Software.
©Copyright 2010, Hume Integration Software
All Rights Reserved
This document may not be reproduced or redistributed without prior written permission of Hume Integration Software. Licensed users of HIS provided software have permission to reproduce or electronically distribute this document to support their usage of the HIS software.

Introduction


The Java Datahub is a database server, a data subscription server, a message system server, and an extensible software component that can be embedded in a custom application or executed directly.  The synergistic combination of these features is an ideal enabling platform for automation applications.   The Java Datahub has particular strengths for distributed automation applications that involve more than one application process, programming language, operating system platform, or software developer.

The Java Datahub server provides familiar database table usage either from Java code, or by using standard SQL (Structured Query Language).  Database tables are managed in-memory in order to provide high performance.  By limiting the database features to a carefully chosen subset of SQL, the Java Datahub requires only a small resource footprint.  When the application data is managed in Datahub tables, you immediately have a graphical user interface (GUI) to view and edit your data, either using the GUI of the server application or by using Datahub client applications running on other computers in your network.    When developing a complex distributed application, having the GUI buys time - you can use the Datahub GUI and defer creating application specific dialogs until after your core logic is operational. 

Table data is more dynamic and versatile than the static property files commonly used by Java applications.  You only need to create the configuration data in the data server process, and use table subscription logic to notify client processes when configuration data has changed.  Compare this to coding a one-time reading of properties during initialization in every process, and then restarting each application process whenever revised properties should take effect.  This is the smart way to communicate setup and process changes to the controllers and other computers in your system.

Table data can be loaded or saved to the file system as SQL statements.  Table data can also be output in HTML format, or as CSV (Comma-Separated-Values) files.  We provide example Java applications which demonstrate using subscriptions that perform incremental table saving to the file system in order to have persistent table data even if the application ends abruptly.  One demonstration uses only Datahub features, and another uses Sqlite3 for a persistent store.  The Datahub integrates easily with persistent databases and it is common to use SQL messages as subscription feeds to a persistent database server or to load the startup configuration data for a Datahub application from a persistent database.   However, the Datahub is not trying to compete with Oracle.  The Datahub's role is to provide light-weight integration components that are embedded directly in your applications to manage data and status notifications with consistency, robustness, and far less application work.

It is typical to mix platforms such as Windows and Linux, and other programming languages such as C# .NET, and POSIX C/C++ in a distributed automation system.  The Datahub functions as a DMH message system server and these client platforms are easily bridged using various DMH client libraries.   The DMH does not require the tedious pre-processing and compilation of IDL and the recompilation of your whole application because some method call was changed.  Also, unlike when using CORBA or similar RPC systems, you do not need to develop elaborate error trapping schemes in order to prevent crashing when clients have not connected or when they disconnect. 

Some other primitive message passing systems require writing parsing code or the tedious specification of the format of text commands that are exchanged in messages.  The Datahub completely avoids these problems and the time spent in group design discussions is minimal.   The developers only need to understand the structure of data tables for the application.  Then each developer can code his piece of the application independently based on the table schema.   For example, a developer writing the interface to some barcode readers may choose to send SQL insert or update statements from a client process.  Another developer who is working on the supervisory logic in Java can create a table subscription to the barcode data table in order to be notified whenever there is a new barcode reading.  In the subscription logic, his class method is passed exactly the data items he has subscribed to, and he codes his logic without any message parsing.  He is able to test his software by manually inserting or updating barcode table data using the Datahub GUI. 

A table subscription can send SQL notification messages in order to replicate selected table data to other application processes in a distributed system.  Once again, there is no parsing code written by the developer.  The ability to convert SQL to table data is built into any of the Hume Datahub versions and additional Datahub instances can be connected to a DMH server as DMH message system clients.  Subscriptions are a form of coupling comparable to method invocations, but with far more flexibility.   A subscription is notified only for data changes meeting a selection criteria (e.g., "controllerID='processModuleA'") and the subscriber can also choose which column values of the table are included in the notifications.   When creating a subscription, there is a choice of whether the notifications directly send a message to a DMH connected client, or whether a Java method is executed in the Datahub application process.   The latter choice of executing a Java method provides flexibility to work with any other system because subscription data is received as native Java data items, and can be repackaged and processed to meet the needs of the data consumer.

The Java Datahub is a third generation design from Hume Integration Software and the sophistication of the user interface, the richness of the application features, and the availability of the DMH client libraries are the hallmarks of an evolved system.  The core SQL logic is similar to our Tcl/Tk Datahub where the data management is done in C code.    We have been able to provide the Java developer with high-level, type safe classes and methods in combination with the power that SQL has to manipulate thousands of table rows with a single command.

Feature Highlights

Example Applications



User Guide

This document complements the detailed javadoc reference information on the Hume website.  We expect that a Java developer will download and install the javadocs to have detailed API information presented in context during code development.

Installation

A Java developer should be using an IDE such as Eclipse or NetBeans.  Eclipse may be downloaded from http://eclipse.org and it is our recommendation for new users.   Make sure you are using a 1.6 or newer version of the JDK with your Java IDE.   Spend some time learning how to use your IDE if you are a new user. 

You cannot use a Java runtime environment (JRE) for Java Datahub application development; you must install a JDK which includes the tools.jar java compiler library in the lib subdirectory.  This is because the Datahub compiles java code files that are generated from SQL create statements.  The best way to add the tools.jar file is to add it to the default installed JRE for Eclipse, instead of needing to add it to every Datahub application project.  Click the Window Menu of Eclipse and then click on the Preferences menu item.  In the Preferences dialog, expand the Java tree branch, and click on the Installed JREs tree branch.  On the right side of the dialog you see a table of the installed JREs.  Choose your installed JDK by selecting it with the mouse and press the Edit... button on the right.  If the lib/tools.jar file is not shown in the JRE Definition dialog, JRE system libraries list, then use the Add External Jars... button to add it.  The tools.jar file is directly under the lib directory of the JDK install, for example, "C:\Program Files\Java\jdk1.6.0_20\lib\tools.jar".

To get started with the Java Datahub, download three archive files from the Hume download site:
  1. DatahubApp.zip
  2. humeDocs.zip
  3. html85.zip
The DatahubApp.zip contains the Datahub.jar file and Java source code demonstration applications in the subdirectory datahubApp.

The humeDocs.zip contains the javadoc files for the Datahub classes.  This is an essential download for Java developers.  You need to associate the documentation in this archive with the Datahub.jar in order to have in-context API prompting while using Eclipse or similar IDEs.  The javadocs material contains API reference documentation without the introduction and guidance found in this document.  The content is browseable on the Hume website at http://www.hume.com/java.

The html85.zip archive contains the Hume Datahub SDK documentation, which is on the Hume website at http://www.hume.com/html85/ and includes this document.  If you are downloading and installing a Datahub SDK version, you can skip downloading this archive because it is contained in the various SDK downloads.

Extract the contents of the DatahubApp.zip archive to a directory of your choice, such as "c:\java\hume\datahub". 

The humeDocs.zip archive does not need to be unpacked to be used by Eclipse.  However, you probably want to extract the files in humeDocs.zip to make it convenient to browse them when not running Eclipse.  The humeDocs.zip archive contains some of the same directory paths used by the Java source code.   We recommend you keep the documentation and code separate by extracting to a different directory such as "c:\java\hume\doc".   This enables you to share the same javadoc install with a Hume java SECS software installation too. 

If you are not installing a Datahub SDK, extract the contents of the html85.zip archive to a directory of your choice such as "c:\java\hume\doc".  The zip file contains the subdirectory folder html85 so the files can be extracted to the same base directory as the javadocs in humeDocs.zip. 

After extracting the documentation files, create Bookmarks in your web browser to the javadocs index.html file, and similarly to the SDK documentation file html85\index.html.

The directory into which you extracted the DatahubApp.zip files becomes the base of your java CLASSPATH when executing the demonstration applications.  A .classpath file included in the .zip is installed in the directory base to help Eclipse and similar IDE programs add the Datahub.jar file to the classpath.

Demonstration Applications

Here is a summary of how to create an Eclipse project for the Datahub demonstration applications.  
  1. Use the menu item File/New/Java Project
  2. Fill in the Project Name - for example,  DatahubDemo
  3. In the Contents frame,
    1. Choose the radio button "Create project from existing source"
    2. Fill in the Directory field by browsing to the directory you extracted the java files such as "c:\java\hume\datahub"
  4. Press the "Next >" button at the bottom to move to the next dialog.
  5. See the tabbed notebook panels at the top.  Click on Libraries.
  6. The first item in the Libraries is the Datahub.jar.  Click on the (+) sign to expand the item.
  7. Click on the "Javadoc location: (None)" line to select it and press the Edit... button on the right.
  8. Associate either the humeDocs.zip extracted files or the .zip file.  In either case press the Validate... button and then the OK button when done.
    1. For extracted files, use the first radio button choice "Javadoc URL" and Browse... to the directory where the javadocs were extracted such as "c:\java\hume\doc"
    2. For the .zip archive, use the second radio button choice "Javadoc in archive", choose "External file" and browse to the humeDocs.zip file.  For "Path within archive" enter "/".
  9. Press the Finish button (or the OK button if you are revisiting the Project/Configure Build Path dialog).
Now you are ready to browse the demonstration Java code.  In the left Package Explorer pane, expand the package datahubApp and double-click on the DemoJavaUse.java file to load it into the editor.

Here is how you can verify that you have properly installed and created an Eclipse project.   If you are seeing errors along the lines of "com.sun.tools cannot be resolved", go back to the second paragraph of this section and follow the directions to add the lib/tools.jar file to the JDK used by your project.   In the open file DemoJavaUse.java, type ctrl-L and enter 148 to go to line 148 of the file.    A source line near that location is similar to "HubTable abc = Datahub.tableGet("abc");".   Move your mouse over the tableGet method call and you should see API help appear.  If you hover over the word HubTable, you should see the in-context help for the HubTable class.

You should be able to run or debug the DemoJavaUse.java file as a Java application since it contains a static main() method.  Notable files in the datahubApp package include:
DemoJavaUse.java
A general demonstration of creating and using Datahub tables.  Run this program to have the Datahub execute as a DMH server with some example tables and data.  Explore the GUI features which are discussed in more detail below.
MakeTableFiles.java
This class has a main() method to generate the TableRow*.java classes from SQL create table statements, and also to generate example methods for your application.  You may want to edit the directory value that the logic is specifying as a destination directory for created java files.  You may want to copy and modify this file for your own applications.
PersistenceDemo.java
This class demonstrates a technique to have table data saved incrementally on the file system and then be restored at application startup.  We use similar code for our sales order processing application.  The class has a main method for you to run.
SqliteGateway.java
Here we have a demonstration of connecting to the persistent database program Sqlite3, run as a child process of a Java application.  Tables can be loaded into the Datahub from Sqlite3 at startup, and incremental changes are sent to the Sqlite3 program for saving as the Datahub runs.  Depending on your perspective, the Datahub provides a GUI/Subscription server/client-server front-end to an Sqlite persistent database, or Sqlite provides reliable, persistent storage for the Datahub.  The Datahub can function as a filter for an Sqlite database - only important data needs to be forwarded to the Sqlite database for saving.
HubClient.java
This class also has a main method to run.  Execute this application while you are also running the DemoJavaUse application.  The HubClient application demonstrates connecting to the server and using a table subscription to make an automatically refreshed clone of a table at the server.


Usage Concepts

A central concept of the Datahub is managing application data using tables.   Tables provide an organized, generic approach to managing data, so you wind up with a less complex application that does not have a mess of special purpose classes just to work with various data items.  You also benefit immediately from having features to display, edit, save, export, and restore complex data without having to develop any code for each of your special purpose classes.  So this is the basic value proposition - you can buy a more advanced starting point for your application and not re-invent generic data management features.

The subscription features of the Datahub provide the ability for application logic to be notified when table data that matches a selection criteria is changed.  Subscriptions can provide efficient event propagation to update GUI screens that display status information such as equipment alarms, production statistics, or data graphs.  Subscriptions can also send messages to client applications that are running on different controller systems, so they are also useful to propagate dynamic setup and configuration changes, or to feed-forward process data throughout a distributed control system.

The Java Datahub can be run without any Java code customization but it is more common for the Java developer to embed its features in a custom application.  The Java programming API of the Datahub provides for direct, type-safe access to table data and this can be simpler than using SQL statements, especially when working with one table row at a time.  However, the expressive power of SQL lets you manipulate thousands of table rows in one statement.  So most applications will use a mix of Java methods and SQL statements for data manipulation depending on the context.
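As a sketch of this mixed style, a code fragment might look like the following (the table and column names here are hypothetical; tableGet and sqlReply are documented Datahub methods, but consult the javadocs for their exact signatures):

        // Type-safe, row-at-a-time access through the HubTable class
        HubTable equipment = Datahub.tableGet("equipment");
        // ... use HubTable methods for single-row reads and writes ...

        // Set-oriented manipulation of many rows with a single SQL statement;
        // sqlReply executes the statement and returns the reply text
        String reply = Datahub.sqlReply("update equipment set color='blue' where class='hx'");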

In typical use, a Datahub is also participating in a DMH message system group.  In this usage, the Datahub receives DMH messages sent to its command mailbox, and executes the messages as SQL statements. Usually these command messages are standard SQL statements such as the SQL insert statement or the SQL select statement.  Data that is held in tables is accessible by any client application that connects to the Datahub using the DMH message system.  For convenience to its message system clients, the Datahub accepts and processes certain statements that are not standard SQL, which allow the clients to access custom commands that the Java developer creates to extend the Datahub command set - see the eval and the lval command descriptions.

The Graphical User Interface (GUI) of the Datahub provides features to display and manipulate table data.  The GUI also has features to help the developer such as the display of example programming statements.  There is also a console window for the input of SQL commands, and a status display for the DMH message system.  Various Java classes used by the GUI are available for use in your own applications.  The Java Datahub can be used with or without the GUI being shown.

 

Database Management

The Datahub uses a subset of ANSI standard SQL for database management. The basic SQL commands are supported for operations on one table per SQL statement. Table joining, grant and revoke, commit and rollback, creation of views, or data integrity constraints are not directly supported by the SQL command. However, because the SQL functionality is embeddable in a Java application, it is possible to provide functionality equivalent to many of these advanced SQL features by writing custom methods.

 

SQL Syntax

Table names and column names can have 1 to 31 alphanumeric or underscore characters. They should start with a leading alphabetic character.  You may use either alphabetic case for table and column names and for SQL keywords in your SQL statements. The Datahub retains the original alphabetic case of table and column names from the table create statement, and uses this for query replies.

SQL character strings must be delimited with single quotes, even when they are a single character. To imbed single quotes in a character string, double them, as in the example: 'Here''s Johnny'.

The Datahub is able to convert data types in SQL statements and expressions, between the basic integer, long, floating point, or character types. For example, the string constant '43' is acceptable as the value of an integer or as the floating point value, 43.0. Similarly, the numeric value, 3.14, if used in context where a character value is expected, will be treated as the character string '3.14'.
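Putting the quoting and conversion rules together, a statement like the following is accepted (the table and column names here are hypothetical):

        insert into equipment (machine, owner) values ('hx14', 'O''Brien')

The doubled quote stores the value O'Brien.  Similarly, because of the automatic type conversion, a statement such as

        update equipment set slot='43' where machine='hx14'

stores the integer 43 if slot is an INTEGER column.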

There is sufficient information in the reference section of this document to understand the syntax of SQL statements that are accepted by the Datahub. If you are new to relational databases, you may wish to obtain additional information on table design and data normalization.

Database tables are created using the SQL create command, and destroyed using the SQL drop command. Each column of a table is a specific data type such as INTEGER. The supported data types are described with the create command.
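For example, a table might be created and later destroyed with statements like these (the schema is hypothetical, and the exact create syntax and the full list of data types are detailed with the create command reference):

        create table equipment (machine varchar(31), class varchar(8), color varchar(16), primary key (machine))
        drop table equipment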

Data is added to a table one row at a time using the SQL insert command.   The Java Datahub also features the eval store command which provides for adding more than one row at a time.

The SQL select command is used to query existing data. The SQL update command is used to modify stored data, and the SQL delete command is used to delete data rows.
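Using a hypothetical equipment table, the four data manipulation commands look like:

        insert into equipment (machine, class, color) values ('hx14', 'hx', 'green')
        select machine, color from equipment where class='hx'
        update equipment set color='blue' where machine='hx14'
        delete from equipment where machine='hx14'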

 

Using Key Columns

When a table is created, one or more columns are specified as primary key columns.  Often there is an obvious key value such as a lot ID or a name.   In situations where there is not an obvious key, a unique timestamp value (for example see com.hume.datahub.Localtime) or a sequence number can be used.  When key columns are used, insertion of duplicate rows is not allowed. Said another way, you cannot insert a row into a table if a row already exists in the table with the same values of the key columns. In addition to preventing duplicate rows, keys are used to greatly speed the lookup of rows, when processing the SQL insert, update, delete, or select commands. If all of the values of the key columns are specified, the specific row is located directly without performing comparisons against every row in the table.
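For instance, suppose machine is the single primary key column of a hypothetical equipment table:

        insert into equipment (machine, class) values ('hx14', 'hx')

A second insert with machine='hx14' is rejected as a duplicate, and a query that supplies the full key value, such as

        select * from equipment where machine='hx14'

locates the row directly without scanning the table.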

Using key columns also causes the select command to return row data in sorted order. In contrast to ANSI standard SQL, the sorted result is obtained by default, without using an ORDER BY clause in the SQL select command. If the keys are VARCHAR data types, the sorted order will be a proper ascending sequence of the primary key columns. If numeric data types are used as keys, the natural ordering is not as useful.  However, the HubTable class has a boolean value which can be set to control whether additional sorting is performed on selection results to properly sort numeric data.  See HubTable.setSortFlag().

 

The Where Clause

When using the select, update, and delete commands you optionally provide a WHERE clause which specifies selection criteria for the affected rows. Also, a WHERE clause can be specified when using the SQL open command to establish a subscription only for rows meeting the WHERE clause selection criteria. A WHERE clause consists of one or more comparisons between a column value and a constant, or two column values. All of the usual SQL comparison operators are supported: equal (=), not equal (<>), less than (<), less than or equal (<=), greater than (>), and greater than or equal (>=). Compound conditions are expressed using the NOT, AND, and OR operators. Parentheses may be used to group terms. All of the comparison operators have equal precedence, and their precedence is higher than the NOT, AND, and OR operators. So, if you write

        select * from equipment where machine='hx14' or machine='hx15'

it is equivalent to

        select * from equipment where (machine='hx14') or (machine='hx15')

since the OR operator binds less tightly than the comparison operator =.

NOT has a higher precedence than AND which has a higher precedence than OR. If you write

    select * from equipment where
        class='hx' and not color='green' or machine='zq2'

it is equivalent to

    select * from equipment where
        ((class='hx') and (not (color='green'))) or (machine='zq2')

Your WHERE clauses can also use the LIKE comparison operator to compare a VARCHAR field to a string constant pattern. Standard SQL uses the % character as a wild card that matches 0 or more characters, and the underscore, _, as a wild card that matches 1 character. So if you write

    select * from equipment where machine like 'hxq%'

you will select machine values such as 'hxq14'.

The LIKE operator has been implemented using the java.util.regex.Pattern class. If you look at the reference for this class, you will see that it has more features than the simple matching characters of the standard SQL LIKE. These powerful features have not been disabled, and you are able to use them in your LIKE operator patterns. However, these features are not standard SQL, and you should consider that these features are not portable to other SQL databases. The other implication of not disabling these features, is that if your LIKE pattern contains special characters such as *, ?, or [ and ], you need to escape them using the backslash character to have them treated as regular characters. You may also precede the SQL wildcard characters % and _ with the backslash character in order to force them to be interpreted as ordinary characters.
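To picture how the SQL wildcards relate to regex syntax, here is a small illustrative translation in plain Java.  This is a sketch of the concept only, not the Datahub's actual implementation; it simply maps an unescaped % to .*, an unescaped _ to ., and quotes everything else:

```java
import java.util.regex.Pattern;

public class LikeSketch {
    // Translate a SQL LIKE pattern into an equivalent regular expression.
    // An unescaped % becomes .* and an unescaped _ becomes . ;
    // a backslash-escaped character (including % and _) stays literal.
    static String likeToRegex(String like) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < like.length(); i++) {
            char c = like.charAt(i);
            if (c == '\\' && i + 1 < like.length()) {
                sb.append(Pattern.quote(String.valueOf(like.charAt(++i))));
            } else if (c == '%') {
                sb.append(".*");   // SQL wildcard: 0 or more characters
            } else if (c == '_') {
                sb.append('.');    // SQL wildcard: exactly 1 character
            } else {
                sb.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(Pattern.matches(likeToRegex("hxq%"), "hxq14"));  // true
        System.out.println(Pattern.matches(likeToRegex("hx_"), "hx14"));    // false
        System.out.println(Pattern.matches(likeToRegex("100\\%"), "100%")); // true
    }
}
```

The backslash handling mirrors the escaping rule described above: the pattern 100\% matches only the literal string 100%.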

 

Subscriptions

Table data subscriptions are opened to have notification messages sent any time specified table data is changed, or to have a Java method executed any time specified data is changed. There can be any number of subscriptions on each table. For example, you may wish to have each user interface open a subscription to a table that holds active alarm data so that a message is sent to every user interface when a new row representing a new alarm is inserted.

Another example use of a table data subscription is to perform realtime analysis on new measurement data using a method that executes in the Datahub. The analysis code is activated without the overhead of interprocess communication. The logic that produces the measurement data, whether it be an equipment interface, or user interface, is developed, tested, and deployed independent of the analysis logic.

The SQL open command is used to create a table data subscription that provides message notifications. The Java developer uses methods in the HubTable class such as subscriptionAddJava() to create subscriptions that use method calls for notifications.  A subscription is created for one or more columns of a specific table for all rows which match specified selection criteria.  There is a SYNC option that causes notifications for all of the existing data matching the selection criteria at the time the subscription is opened. The SYNC feature makes it easy for a client to synchronize with the current and future states of the application data. The ease of meeting this application requirement stands in marked contrast to other CIM products which offer broadcast or subscription based message propagation without any history.

The SQL close command is used to remove table data subscriptions.

 

DMH Message System Integration

One of the key reasons to use the Datahub is to have the ability to easily share data across application processes, even when they use different programming languages or they run on different computer systems.    It is typical to initialize as a DMH server using the -server argument option of Datahub.main as discussed in the next section.  Other application processes connect to the DMH server as message system clients.  The processes are then able to exchange messages with the Datahub, or with each other.   The usual design concept is that the Datahub process receives command messages that are sent to a well-known mailbox name such as DATAHUB.  The Datahub server executes the messages using the Datahub.sqlReply() method, and sends the return value of the method evaluation as a reply message.  

A process which is connected as a DMH message system client can also receive command messages, act on them, and send replies.   In fact, additional Java Datahub processes can be connected as DMH message system clients and each process can manage its own SQL tables.  So the distinction of client and server in this discussion is only to specify how the processes interact with the DMH message system, and not how they behave as part of a distributed application.

When the Datahub process is connected to the DMH message system and processing received SQL commands, it is possible to connect to it using a client program that provides a user interface equivalent to the Datahub GUI, running in the client process.   The client Datahub GUI application gives you the ability to check on the status of the Datahub when it is running on a remote system or running in the background.   The JHubClient.exe is a standalone Datahub GUI client program for Windows that provides a remote Datahub GUI for the Java Datahub.  This program is optionally downloaded from the Hume download site.   You can specify the Datahub DMH command mailbox on the command line to this program, which is convenient when defining a Windows Program Item.  The command line syntax is JHubClient MAILBOX@[HostnameOrIP:]DmhGroup as in:

JHubClient  DATAHUB@localhost:mbx

The JHubClient application is also available for other platforms supported by the Hume SDK including Linux.

In addition to the basic conversation model of receiving a command and sending a reply, commands are often sent for execution to the Datahub without the performance overhead of sending a reply, or waiting for a reply, or receiving and processing a reply.  It is a valid assumption on the part of the sender that the message gets delivered to the DMH server as long as the connection stays up.  The DMH client connection provides a separate callback notification if the server becomes disconnected, and similarly the DMH server can be notified when a client disconnects.

Subscription notifications represent another conversation model.  The data consumer sends a single message to the Datahub to register a subscription using the open command.  Subsequently there can be many notification messages sent to the data consumer as changes to the table data occur.  The subscription mechanism can be used in a lightweight and dynamic fashion, for example, to keep charts that are being viewed at a user interface in sync with changing data.

To support testing and development needs, the Datahub GUI provides for interactively sending DMH messages using the SQL Command Console and the eval dmh command.  Enter the command eval dmh help in the console window to see a summary of the syntax. The Hume Datahub SDK documentation contains a separate reference document for the Java DMH library and the com.hume.DMH package is also covered by the Datahub javadocs.  These links provide detailed API information for understanding and using the DMH message system.

An experienced system architect will divide the functions of a complex application into separate processes for a variety of reasons which include performance considerations, reliability, bridging physical locations, and isolating transient or unstable components.  For example, user interfaces may be designed to run on the user's desktops, and equipment interfaces may be deployed on UPS backed systems in a server room.  The DMH message system provides the integration glue needed to connect the disparate systems into a distributed application.

 

Command Line Usage

You can execute the Java Datahub directly as the default application of the Datahub.jar file.  Since the Datahub compiles Java code for tables at runtime, you also need to have the Java compiler tools.jar on your classpath.  A Windows user could execute commands such as:

set JAVA_DIR=c:\Program Files\Java\jdk1.6.0_20
"%JAVA_DIR%\bin\java.exe" -cp "%JAVA_DIR%\lib\tools.jar" -jar Datahub.jar

to run the Datahub using the Datahub.jar in the working directory. 

Command line arguments to the Datahub program can be added to the invocation above:

"%JAVA_DIR%\bin\java.exe" -cp "%JAVA_DIR%\lib\tools.jar" -jar Datahub.jar -server DATAHUB@mbx

A Java developer is more likely to execute the Datahub.main from his own main method.  Here is the startup line with comments from an example application:

        // Show the Datahub GUI and also initialize as a DMH Server.
        // You can run more than one DMH Server if each one has its own DMH
        // group name.  The startup logic ensures that only one instance is serving
        // a particular group.  The -setup argument creates a temporary license if needed.
        Datahub.main(new String [] {"-setup", "-gui", "1", "-server", "DATAHUB@mbx"});

The possible command line options are:

-console 1|0
Whether to accept and process commands using standard console IO stdin and stdout.
-gui 1|0
Whether to show the Datahub GUI window.  The Datahub user interface is a tool for developers and not for typical application users, because it makes it easy to drop tables, delete rows, terminate client applications, or make other potentially disruptive changes.
-client MailBox@DmhGroupname
This option instructs the Datahub to join an existing DMH message group as another client of the DMH server. It is mutually exclusive with the -server option.  The Mailbox portion of the argument term is the mailbox that the newly starting Datahub uses to receive command messages.
-server MailBox@DmhGroupname
This option instructs the Datahub to initialize as the server of a DMH message group. It is mutually exclusive with the -client option. The Datahub will exit if another application is already using the socket port which the DMH Groupname argument is mapped to.
-setup
This option creates a temporary license file in the working directory to enable using the program for temporary purposes such as evaluation, demonstrations, or testing.
-include pathname
This option lets you load a file of SQL statements at startup. A file can include other files using the "include pathname" SQL command.
-tableDir directoryPath
The tableDir option sets the file system directory used for generating and compiling TableRow files when processing SQL table create statements. See the Datahub.setTableDir() method documentation to understand the default behavior.

Java API

Here is our frank advice to be a success with Java development using the Datahub.
  1. Use Eclipse - the context sensitive prompting and help is invaluable.
  2. Install the Hume javadocs.  Use the instructions in the Installation section.  Now you have context sensitive help for the Datahub and related classes.
  3. Get started on your application by focusing on the data that your application works with.  Time spent up front in thinking through the organization and relationships of data items is paid back many times during the development and evolution of your application.  Create some tables and work with them.  Don't be afraid to modify the tables as you develop and grow the application.
  4. Modify and Use the MakeTableFiles.java example application.  This application shows you how to prepare TableRow Java files that you need to work with tables using Java code.
This document supplements the reference material in the javadocs.  You are advised to look over the Java API since most of the capabilities of the SQL command interface are also provided by Java method calls.  There are also features of the Java API that are not accessible through SQL commands.

SQL Commands

SQL Commands are executed by
  1. executing the Datahub.sqlReply(String sql) method, or
  2. invoking the Datahub.sql(String sql) method, or
  3. sending an SQL command as a message to the Datahub command processing mailbox.
The return value of the Datahub.sqlReply method is formatted text which contains an integer return code.  A negative return code indicates an error such as using an unknown table name.  A return code of 0 indicates the usual successful result, and positive return code values indicate warning results.   The Datahub.sql() method is a convenient shortcut that executes sqlReply and parses the integer code result.   Success and positive return codes are returned directly as the return value of the sql() method.   Negative return codes cause an SqlException to be thrown by this method, since negative codes represent exceptional results of an undesirable nature.  For choice (3) above, the return value of Datahub.sqlReply is used directly as the reply message to the received SQL command.   So the example reply strings shown in this reference section are seen directly for choices (1) and (3), and the return code values seen in the replies are the values parsed and returned for choice (2).
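The leading return code convention can be exercised with a small self-contained sketch (the class and method names here are illustrative, not part of the Datahub API):

```java
// Sketch: extract the leading integer return code from a reply string
// such as "0 CREATE {table - table created.}".  This mirrors the parsing
// that Datahub.sql() performs on a sqlReply() result; the helper itself
// is hypothetical.
public class ReplyCode {
    public static int parseReturnCode(String reply) {
        String trimmed = reply.trim();
        int space = trimmed.indexOf(' ');
        String code = (space < 0) ? trimmed : trimmed.substring(0, space);
        return Integer.parseInt(code);
    }

    public static void main(String[] args) {
        // Negative codes are errors, 0 is success, positive codes are warnings.
        System.out.println(parseReturnCode("0 CREATE {table - table created.}"));
        System.out.println(parseReturnCode("-5 DELETE {table - table not found.}"));
    }
}
```

A caller using Datahub.sql() directly would instead catch SqlException for the negative cases.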

Semi-colons are not needed at the end of the SQL statements passed to the Datahub.  Semi-colons are accepted and removed from input without any meaning.  They cannot be used to signify more than one SQL statement in a command argument value.  The Datahub methods expect to receive complete SQL statements as one string argument.  This does not prevent your application from providing a user console that accepts continued lines and uses the semi-colon to recognize the completion of a statement.   

The include and read commands enable you to process SQL statements that are read from a file.  The Datahub input logic interprets newline characters as indicating the end of a statement unless the newline characters are embedded in SQL 'single quoted string arguments' or embedded in an eval command argument or other data argument that can span multiple lines with different parsing rules.  Therefore, newline characters are not allowed between the tokens of an SQL command where additional space characters are accepted.


<blank>
<error>
#
CLOSE
CREATE
DELETE
DROP
ECHO
EVAL
EXIT
HELP
INCLUDE
INFO
INSERT
LVAL
OPEN
PING
READ
REM
SELECT
UPDATE
WRITE



 
<blank>
SQL Syntax  
Examples
 
Description A blank input line is somewhat exceptional in that the reply is also a blank line.  The return code for the sql() method is 0.
Replies

  

<error>
SQL Syntax  
Examples
this text is not a valid SQL statement
insert bad syntax here
Description If the input line does not begin with a recognized SQL command, the error message is similar to the following.  If the leading token of the command is recognized, the usage syntax from the help command is shown.
Replies -1 <UNKNOWN> {Unrecognized command, enter "help" for command list.}
-1 INSERT {INSERT INTO table [( column [,column]*)] VALUES ( value [,value]*)}

 
# comment
SQL Syntax # comment text
Examples
# Comments are useful in include files or for annotating console interaction.
# The following statements provide default values that get updated.
################ comments like this get noticed #############################
Description This command lets you add comments to files of SQL commands that will be processed using the SQL include command. The # command and the rem command are equivalent.
Replies 0 # {}

 

CLOSE Close a subscription
SQL Syntax CLOSE SUB subname TO table
Examples

close sub dataSaver to SmifPod

Java Equivalent tableObject.subscriptionClose(subname);
Description This command removes the subscription named subname from the table named table. Use info, or the Datahub GUI to display existing subscriptions.
Replies
0 CLOSE {Table table, subscription subname closed.}
-5 CLOSE {table - table not found.}
-8 CLOSE {Subscription "subname" not found in table table}

 
CREATE Create a new database table
SQL Syntax CREATE TABLE table ( colname type [, colname type]*, PRIMARY KEY(colname [,colname]*))

type := { INT[EGER] | LONG | VARCHAR | VARCHAR(n) | FLOAT | DOUBLE [PRECISION] }

n := non-zero unsigned integer

Examples
"create table SpcType (typecode varchar, description varchar(80), primary key(typecode))"

String TraceMessage = "create table TraceMessage (seq int, timestamp varchar(26), tag varchar(10)," +
" message varchar(15000), primary key(seq))";
int returnCode = Datahub.sql(TraceMessage);
Description The SQL create statement creates an empty database table.

Each column of the table is a specific datatype:

INTEGER | INT
Corresponds to a Java 32-bit signed integer, with the range of -2147483648 to 2147483647.

LONG
Corresponds to a Java 64-bit signed long integer.

VARCHAR(n) | VARCHAR
Character strings are stored as Java String values. With VARCHAR(n), you specify n as the maximum number of characters possible for the field. The VARCHAR type declaration without a size indicated in parentheses is equivalent to VARCHAR(1).  The Datahub only uses as much memory as is actually needed to store your data, so there is no added overhead in specifying generous array sizes.

You can store and retrieve any of the character values supported by Java except for the Unicode Word Joiner character, '\u2060', which is used internally as a separator for primary composite key values.

FLOAT | DOUBLE | DOUBLE PRECISION
The Java Datahub stores these types using double variables with a range of absolute values of >= 2.225073858507201e-308 and <= 1.7976931348623158e+308. Most databases store the FLOAT type as 4 byte single precision floating point values with a smaller range; this may be a portability issue. The Datahub has a static String value doubleFormatSpec which is optionally set to a non-null value to be used with String.format() to control the formatting of all floating point values in query results.


A table must be declared with one or more column values specified as primary keys.  The table data is saved and presented in query results using the natural sorted order of the key column fields.  This is a welcome feature compared to standard SQL, where no row ordering is implied.   The use of key values ensures that no more than one table row at a time has the same set of key column values.  If there is not a natural key for the table data such as a unique identifier, you may want to use a milliseconds time value, a com.hume.datahub.Localtime high resolution timestamp string, or an integer sequence number as a single primary key column.
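As a sketch of the timestamp-key suggestion, a sortable 26-character timestamp (matching the varchar(26) column in the TraceMessage example above) can be produced with plain java.time as an alternative to com.hume.datahub.Localtime; the class name here is hypothetical:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampKey {
    // 26 characters: "yyyy-MM-dd HH:mm:ss.SSSSSS", sortable in natural
    // string order, suitable as a single primary key column.
    static final DateTimeFormatter KEY_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS");

    public static String now() {
        return LocalDateTime.now().format(KEY_FORMAT);
    }

    public static void main(String[] args) {
        System.out.println(now()); // e.g. 2010-09-06 10:47:03.213666
    }
}
```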

The PRIMARY KEY( colname, [, colname]*) declaration does not have to be the last declaration phrase as long as the column names being declared as keys have already been specified.  You may use more than one PRIMARY KEY declaration phrase to declare a composite key.   The model syntax above only shows the most concise way of declaring keys.

See the MakeTableFiles.java demonstration application for a useful example of preparing TableRow .java files at development time in order to support Java code development.  In brief, you can manipulate Table data using type safe Java method calls if you prepare TableRow .java files as part of your application.

There is no set limit to the number of rows that are stored in a table. If you will be querying tables that have many thousands of rows, you should design your application to use the primary keys for most queries so that performance scales well.

There is no set limit to

  • the total number of characters in an SQL command or
  • the number of lexical tokens in a statement, or
  • the number of columns in a table,
  • or the number of tables

However, if you want to interoperate successfully with the Tcl/Tk Datahub, then you should limit the number of characters in an SQL command to be less than 70,000 and the number of lexical tokens in an input line has to be less than 2500. This imposes a generous limit on the amount of data that can be stored in a single table row. These figures also imply that the maximum number of columns is limited to some value over 1200.

Similarly there is no fixed limit on the size of a query result.  If your operating system and TCP/IP implementation can handle it, you can successfully manipulate multiple megabyte queries.

Use info, or the Datahub GUI to display which tables have been created. Use drop to delete tables.
Replies
0 CREATE {table - table created.}

-2 CREATE {declared key colname unknown}
-4 CREATE {incomplete statement}
-4 CREATE {expected ) to close array size for column colname}
-4 CREATE {expected ( after KEY}
-6 CREATE {missing KEY after PRIMARY}
-6 CREATE {missing declaration of primary key(s)}
-7 CREATE {bad array size for column colname}
-7 CREATE {bad data type for column colname}
-7 CREATE {column "colname" already specified}
-7 CREATE {duplicate key colname}
-7 CREATE {improper column name (a keyword perhaps?)}
-12 CREATE {table - table already exists}

 
DELETE Delete rows from a table
SQL Syntax DELETE FROM table [WHERE search_condition]
Examples

delete from alarmsHistory where tsReported < '2010-07-05 00:00:00'

delete from someTable

delete from spcData where toolID like 'HXQ%'

Java Equivalent tableObject.deleteRow(rowObject);
tableObject.deleteWhere(whereClause);
Description This command deletes rows that are selected by the search_condition from the table named table. If no WHERE clause is supplied, all of the rows are deleted. See The Where Clause for the grammar of the search_condition argument.
Replies
0 DELETE {table - 1 row deleted.}
0 DELETE {table - N rows deleted.}

3 DELETE {table - 0 rows deleted.}

-2 DELETE {column "badname" not found}
-5 DELETE {table - table not found.}
negativeReturnCode DELETE {syntaxErrorDescription}

 

DROP Destroy an entire table
SQL Syntax DROP TABLE table
Examples

drop table reportTemp

Java Equivalent Datahub.tableDrop(tableName);
tableObject.drop();
Description This command destroys the named table, freeing the memory resources it was using. Subscriptions on the table are closed as it is dropped.

It is more efficient to empty a table using delete than to drop and re-create it.

Because of the behavior of Java with loading and caching class files, you cannot dynamically drop a table and recreate it using the same name with a different schema. 

Replies
0 DROP {table - Table Destroyed.}

-5 DROP {table - table not found.}

 

 
ECHO send the arguments as a reply
Syntax ECHO textArguments
Examples

echo hello world

Description This command exists for compatibility with the older versions.  It can be useful for inter-process communication performance testing.
Replies
0 ECHO {textArguments}
0 ECHO oneToken
0 ECHO improper\ list\ unclosed\ \{brace


EVAL evaluate custom procedures
Syntax EVAL procName [arg [arg]*]*
Examples

eval dmh help

eval localtime 15

eval pwd

Description This command provides a means for message system clients to execute custom Java commands in the Datahub process.   Use the command "help eval" to obtain a list of the installed custom commands.   Command arguments are parsed using list format conventions where white space and braces or double quotes are used to separate arguments.

The eval command feature is provided for developers to add their own commands, extending the set of commands available to the Datahub clients.  The Java developer uses the Datahub.addEvalProc() method for this purpose.

There are a number of built-in commands which support features such as obtaining clock values from the server, sending DMH messages, or accessing the file system when using the Java Datahub client application.

Use the eval dmh command to send DMH messages if you have initialized the DMH message system:

# obtain a syntax summary for the eval dmh command
eval dmh help
# obtain the last DMH message read from a mailbox
eval dmh lastRead box
# send a message to a mailbox
eval dmh put box msg
# send a message requesting an asynchronous reply
eval dmh putr destBox replyBox msg
# send a message and wait for the reply
eval dmh xact box msg

Other built-in commands include:

# obtain the server's TCP/IP hostname
eval dp_hostname

# localtime examples
# The default format is "yyyy-MM-ddTHH:mm:ss.mmmmmmZ", the Z implies the UTC timezone (GMT)
eval localtime
0 2010-09-06T15:41:08.900998Z
# format 9 is YYYY-MM-dd HH:mm:ss for the local timezone
eval localtime 9
0 {2010-09-06 10:45:00}
# format 10 is YYYY-MM-dd HH:mm:ss for the UTC timezone
eval localtime 10
0 {2010-09-06 15:45:56}
# format 15 is a higher resolution timestamp for the local timezone
eval localtime 15
0 {2010-09-06 10:47:03.213666}
# format 16 is a higher resolution timestamp for the UTC timezone
eval localtime 16
0 {2010-09-11 13:37:41.229947}

# eval pwd obtains the server's working directory
eval pwd
0 {C:\hume\java}

# Use eval show to have the Datahub show or hide the user interface
eval show gui 1|0

# The store command can insert or update multiple rows of table data at once.
# If a row with the same key value does not exist, an insert occurs,  otherwise the existing row is updated.
# The return code is the number of rows stored.
# eval store table [col [col]*]* {rows | row}
info schema abc
select {abc Schema} {{Column Name} Key Type} {{{Column Name}}} 3 {} {{a 1 varchar(20)} {b {} int} {c {} long}}
eval store abc {a b c} {{one 1 1} {two 2 2}}
2 {2 rows stored in table abc}
# insert
eval store abc {three 3 3}
1 {1 rows stored in table abc}
# update the row just added
eval store abc {three 33 3333}
1 {1 rows stored in table abc}

Replies The reply value of eval commands is a two-element list consisting of a return code integer and a string result.  The expected convention is that the return code value is negative for error, zero for success, or greater than 0 for success with a warning or information message.

 
EXIT shutdown the Datahub process
Syntax EXIT
Examples

exit

Description The exit command is used to terminate the Datahub process.  The termination occurs shortly after the command is received so that the reply message can be sent.
Replies
0 EXIT {shutdown initiated}

HELP display command usage
Syntax HELP [commandName]
Examples

help

help create

Description When used with no arguments, the help command returns a selection result which contains the syntax summaries for all of the SQL commands.  When used with a command name argument, the help command replies with a syntax summary for the particular command.
Replies
0 HELP {command syntax summary}

help insert
0 HELP {INSERT INTO table [( column [,column]*)] VALUES ( value [,value]*)}

 
INCLUDE process a file of SQL commands
Syntax INCLUDE pathname
Examples

include c:/home/cimdev/config.sql

include startupData.tab

Description The include command reads and processes a file of SQL commands. The logic reads each text line in the file as a new SQL command except if the ending newline occurs inside of an SQL string constant or inside of an unfinished list format data item. In the latter cases, additional lines are read until the SQL command is complete.  Semi-colons are ignored; they are not used as command delimiters.

Relative pathnames are with respect to the current working directory of the Datahub process. 

We suggest you specify the pathname using Unix style slashes even on Windows platforms for consistency.  You may optionally surround the pathname argument with SQL style single quotes, or use surrounding braces and/or double quotes per list formatting conventions.

The INCLUDE command may be nested inside of included files to arbitrary depths.
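The line-oriented reading described above can be sketched in simplified form.  This hypothetical helper handles only whole-line statements and comment lines; the real Datahub logic additionally continues a statement across newlines that occur inside quoted strings or unfinished list-format data:

```java
import java.util.ArrayList;
import java.util.List;

public class SqlFileReader {
    // Simplified sketch of include-style processing: one statement per
    // line, skipping blank lines and # / rem comment lines.
    public static List<String> statements(List<String> lines) {
        List<String> out = new ArrayList<>();
        for (String line : lines) {
            String s = line.trim();
            if (s.isEmpty() || s.startsWith("#")
                    || s.equalsIgnoreCase("rem")
                    || s.toLowerCase().startsWith("rem ")) {
                continue;
            }
            out.add(s);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(statements(List.of(
            "# default configuration",
            "create table SpcType (typecode varchar, primary key(typecode))",
            "insert into SpcType (typecode) values ('X')")));
    }
}
```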

Replies
A selection result reply which contains the reply value of every included command.  

INFO
Obtain status information
Syntax INFO { TABLE | SCHEMA table | SUB table | VALUES }
Examples

info table

Description There are four variations of the INFO command:
INFO TABLE
returns a selection result containing current status of the tables which shows the number of rows, columns, and subscriptions for each table.
info table 
select {Table List} {{Table Name} Rows Columns Keys Subs} {{Table Name}} 7 {}
{{abc 4 3 1 0} {abcd 1 4 1 0} {AppConfig 4 3 2 0} {Cassette 25 3 2 0} {SchemaDescription 7 3 2 0}
{SpcRunRule 0 2 1 0} {TraceMessage 110 4 1 0}}
INFO SCHEMA table
returns a selection result with column and key information showing the structure of the table named table.
info schema abc
select {abc Schema} {{Column Name} Key Type} {{{Column Name}}} 3 {}
{{a 1 varchar(20)} {b {} int} {c {} long}}
INFO SUB table
returns a selection result for the subscriptions on the table named table.
info sub abc
select {abc subscriptions} {tablename subscription openStatement}
{tablename subscription} 0 {} {}
INFO VALUES
returns a selection result of name, value pairs.
info values
select {Values List} {Name Value} {Name} 14 {} {{classpath {C:\hume\java}}
{copyright {©2010, Hume Integration Software, www.hume.com}} {cwd {C:\hume\java}}
{java.version 1.6.0_20} {java.vendor {Sun Microsystems Inc.}} {java.vm.version 16.3-b01}
{java.vm.vendor {Sun Microsystems Inc.}} {os.name {Windows XP}} {os.version 5.1}
{queuedNotifications 0} {TableDir {C:\hume\java}} {TablePackageName {}} {tmpdir c:\\tmp\\}
{user.name hume} {version {$Id: javahub_man.html,v 1.4 2011/01/16 20:13:38 hume Exp $}}}
This command is the source of information that is presented using the Datahub GUI or the JHubClient GUI.
Replies
See the specification of the selection list reply format.

INSERT add a new row to a table
Syntax INSERT INTO table [(column [, column]* )] VALUES (value [, value]*)
Examples

insert into SpcChartType (code, description) values ('X', 'XBar, S')

insert into SystemConfig (name, value) values ('HttpPort', '80')

insert into abcd (a,b,c,d) values ('', 0, 0, 0.0);

Java Equivalent tableObject.insert(row);
Description This is a standard SQL syntax for adding a data row to a table. Columns that are not specifically named are given the default values of 0 for numeric fields and the empty string, '', for character data.

As a portability issue you should know that some persistent databases do not allow empty strings to be inserted for character fields that are primary key columns.
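When composing INSERT statements from program data, character values need SQL single-quote delimiters.  The helper below is a hypothetical sketch that also doubles embedded quotes, the standard SQL escape (assumed, not confirmed, to be accepted by the Datahub parser):

```java
public class InsertBuilder {
    // Hypothetical helper: wrap a character value in SQL single quotes,
    // doubling any embedded quotes (standard SQL escaping; assumed here).
    public static String quote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        String sql = "insert into SpcChartType (code, description) values ("
                + quote("X") + ", " + quote("XBar, S") + ")";
        System.out.println(sql);
    }
}
```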

Replies
0 INSERT {table - row inserted.}

-2 INSERT {table - column "colname" not in table.}
-3 INSERT {Unable to insert. Unique key violation}
-5 INSERT {table - table not found.}
-7 INSERT {improper valueTypePhrase for column colname}

many other error messages are possible

LVAL evaluate custom methods which return list results
Syntax LVAL procName [arg [ arg]*]*
Examples

lval help

Description This command provides a means for message system clients to execute custom Java commands in the Datahub process.   Use the command "help lval" to obtain a list of the installed custom commands.   Command arguments are parsed using list format conventions where white space and braces or double quotes are used to separate arguments.

The lval command feature is provided for developers to add their own commands, extending the set of commands available to the Datahub clients.  The Java developer uses the Datahub.addLvalProc() method for this purpose.   The feature provides for returning list or selection results using the same format as the SQL select command.

There is only one built-in lval command - lval help. 
Replies The execution of an lval command returns a list result similar to an SQL select command. 

 
OPEN create a new message subscription on a table
Syntax

OPEN SUB subname TO table SENDTO=mailbox [REPLYTO=mailbox] [LISTFORMAT [ARGDATA=data]] [INSERT] [UPDATE] [SELECT] [DELETE] [DROP] [CREATE] [SYNC] { * | colname [, colname ]* } [WHERE matchingCriteria ]

Examples

open sub ENGDB to shopdata sendto=ENGDB * where engdb_wants=1 and station='test'

Java Alternatives A Java developer can use the following Datahub methods to register callback objects which are notified using method invocations instead of receiving notification messages.

subscriptionAddJava(SubscriberItf receiver, String subname, String[] subcols, boolean wantInsert, boolean wantUpdate, boolean wantDelete, boolean wantDrop, boolean wantSync, String whereClause, Object argdata)
subscriptionAddJavaSync(SubscriberItf receiver, String subname)

subscriptionAddSql(SubscriberSqlItf sqlReceiver, String subname, String[] subcols, boolean wantCreate, boolean wantInsert, boolean wantUpdate, boolean wantSelect, boolean wantDelete, boolean wantDrop, boolean wantSync, String whereClause, Object argdata)

subscriptionAddSqlSync(SubscriberSqlItf sqlReceiver, String subname)
Description

The SQL open command opens a subscription to the table named table for the purpose of obtaining message notifications when data changes matching the subscription criteria occur.

When opening a subscription, you specify the mailbox to receive the notifications using the SENDTO=mailbox clause. By default, the message notifications will be SQL statements. Specify the LISTFORMAT keyword in order to receive messages that are formatted as list data - see the List Reply Format section for details.  With the List Reply Format, you are also able to specify optional data using the clause ARGDATA=data.  If this argument option is specified, then an additional list element is appended to the message data list and the value of this element is the data argument that you have specified.  Use the ARGDATA option to pass additional context information.

When using the ARGDATA = data option, the data value is formatted as a single argument using the formatting rules of list arguments.  You may use braces { } or double quotes to delimit an argument containing spaces.  Do not use SQL style single quotes.  For example, ARGDATA = {this is correct}, or ARGDATA = "also correct", or ARGDATA = OneTokenCorrect.
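A minimal sketch of that quoting rule follows (the helper is hypothetical, and it assumes the value itself contains no braces or backslashes, which have their own list-format escapes):

```java
public class ListArg {
    // Brace-quote a value when it contains white space so that it parses
    // as a single list-format argument, e.g. for an ARGDATA= clause.
    public static String asArg(String value) {
        for (int i = 0; i < value.length(); i++) {
            if (Character.isWhitespace(value.charAt(i))) {
                return "{" + value + "}";
            }
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println("ARGDATA=" + asArg("OneTokenCorrect"));
        System.out.println("ARGDATA=" + asArg("this is correct"));
    }
}
```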

You may optionally specify the REPLYTO=mailbox clause to indicate a reply mailbox for the notification messages. The default value of the REPLYTO mailbox is NULL, which is a special value that means no reply mailbox is applicable. You can use the reply mailbox as a mechanism to indicate which Datahub or which subscription a subscription message came from as long as your REPLYTO mailbox value is acceptable as a single token mailbox name.

You are able to choose the type of data changes you wish to receive notifications for by optionally specifying one or more of the keywords INSERT, UPDATE, DELETE, or DROP. If you do not specify any of these data change types, then you receive notifications for all of them.

The SELECT option affects UPDATE notifications.  If the SELECT option is specified, then the update notification includes every column value in the subscription's list whether the column value has been changed or not.  If the SELECT option is not specified, then the update notification only includes column values whose data has been changed by the update.

The SYNC option causes your subscription to be notified immediately with insert notifications for each row in the table selected by your specified or implicit matchingCriteria. This option lets you synchronize with the existing data, and in the same action be registered for future changes. The default behavior is only to notify you of future changes.

The CREATE keyword is used to receive an SQL create table statement as the first SQL notification message. The SQL create table statement is for the entire table, not just the columns that you are subscribing to. Using both the SYNC and CREATE options is a convenient way to get all of the SQL statements needed to replicate a table. For example, you can use the SYNC and CREATE options and specify as the SENDTO destination the mailbox of another datahub, to create a copy of a table that stays in synchronization with changes to the original table.  The CREATE option is only used with SQL formatted message notifications - it cannot be used with the LISTFORMAT option.

Specify a list of specific column names for your subscription, or use * to indicate all columns. Whether you ask for them or not, all of the key columns for the table are included in your column list, and included in all of your notifications. This is done so that your notifications are unambiguous. You specify a column name to indicate you would like to be notified if a value in that column is updated, or you would like to know the new value of that column when a row is inserted. When a row is updated, update notifications always include values for all of the key columns, and values for the non-key columns requested by the subscription if they have been changed in the update.  You may use the SELECT option, as described above to control whether column values that have not changed are included in the update notification.  An update notification is delivered only if at least one subscribed column value has been changed.  For example, if an update statement is processed that specifies updating a field to its current value, no notifications are delivered.

When opening the subscription you optionally supply a WHERE clause to specify row selection criteria for your notifications. The default is that you are notified for changes to any row. See The Where Clause for the grammar of the matchingCriteria argument.

Here is a fine point on update notifications. The where clause is evaluated on the updated row data, not on the original row data. If your where clause specifies comparisons on non-key column values, (1) you will see update notifications for rows that are updated to match the where criteria from a previous unmatching state, and (2) you will not see update notifications for rows that are updated such that they no longer match your where criteria.

If the value of a primary key column (or columns) is (are) updated, the Datahub processes two notifications; a delete notification for the original value of all the key columns, followed by an insert notification for the row data with new key column values. This behavior guarantees that if a WHERE clause is expressed only on values of key columns, the subscription notifications synchronize perfectly with the selected subset of the table. If a single update notification were sent, (1) a selective subscription would not be notified when a row is updated to no longer match the WHERE criteria, and (2) a selective subscription would receive update notifications for rows that it did not previously know about, as they were updated to match the WHERE criteria.

There can be any number of subscriptions on a table.  The subscription name, subname, must be unique among all of the subscriptions to table table. Subscription names follow the same conventions as table and column names, except that they can be up to 80 characters in length. You use the subscription name with the close command to close a subscription.

As each table data change occurs,  the table subscriptions are scanned and notifications for each matching subscription are queued in sequence for delivery.  The queue of subscription notifications is processed asynchronously by the usual event loop processing with a limit on the continuous amount of time consumed by notification processing so that the usual GUI events can also be dispatched.  Because the notifications are queued and delivered after the actual data change, it is possible that there have been subsequent changes to the table when a notification is received.  In other words, an application cannot assume that it is synchronized with the current state of table data since there may be newer change notifications queued for future delivery.

You should avoid performing lengthy operations, or blocking operations, such as waiting for user input in your subscription handling logic.

The Java Datahub has a subscription feature that can be used to obtain notifications when new Datahub tables are created or when tables are dropped.  The Datahub behaves as if there is a built-in table named tablelist which can be referenced for subscriptions or selections.  Subscription notifications or selections from the tablelist table provide one column value, tablename.  Newly created tables provide an insert notification to the tablelist table subscribers, and dropped tables result in row deletion notifications.  A Java developer can receive Java method subscription notifications for the tablelist virtual table using the Datahub.tablelistSubscribe() method.  This feature makes it possible to create an application which can maintain complete synchronization with the table data of a Datahub without polling to discover if new tables have been created.

Replies
0 OPEN {table, subscription subname opened.}

-2 OPEN {table column "colname" not found.}
-5 OPEN {table - table not found.}
-6 OPEN {SENDTO mailbox is missing}
-11 OPEN {subname - duplicate subscription name for table abc}
-14 OPEN {CREATE option requires SQL notification format}

PING verify online responsiveness
Syntax PING
Examples

ping

Description The ping command can be useful for verifying online responsiveness or benchmarking send-and-reply performance from a client workstation.
Replies
0 PING {ServerMailbox isoTimeStamp}

0 PING {DATAHUB@hippie:mbx 2010-09-11T20:45:21.007255Z}

READ create a populated table from a saved file
Syntax READ [OVER] table
Examples

read over lotdata

Description This command restores a table with its data from a previously saved image. The saved image is created using the write command. The OVER option must be specified in order to replace an existing table with the read command.

The read command expects to find a file of the name table.tab in its current working directory in order to process table table.  

The include command can be used instead of the read command since a saved table image is a file of ordinary SQL statements.  The include command allows for specifying a relative or absolute pathname to the file, and it does not drop an existing table, or delete the existing data, or close subscriptions the way the read over command does.

Subscription information is not saved and restored using read and write. When you use the OVER option and replace a table, existing subscriptions are all closed.

Replies
0 READ {table - Table read from file "table.tab"}

-8 READ {table error reading file "table.tab"}
-10 READ {table exists and overwrite option not set.}

REM comment
Syntax REM comment text
Examples
rem uncomment the following statements for option B 
Description This command lets you add comments to files of SQL commands that are processed using the SQL include command. The # command and the rem command are equivalent.
Replies 0 REM {}

SELECT query table data
Syntax SELECT { * | column [, column]* } FROM table [ WHERE where_criteria ]
Examples

select helpText from HelpConfig where lotID='*' and step='etch'

select * from TraceMessage where timestamp<='2010-12-31'

Java Alternatives tableObject.select(String whereClause)
tableObject.selectAll()
Description select is used routinely to lookup stored table data. You can use the symbol * to specify that you would like all of the column values in your query, or you can explicitly list the columns of interest. If you do not provide a WHERE clause, you obtain all rows of the table. See The Where Clause for the grammar of the where_criteria argument.
Replies

Selection results use the List Reply Format which is described in its own section.

select tablename colList keyColList rowCount errorInfo dataRows

select * from abc
select abc {a b c} a 4 {} {{currentTimeMillis 199 1283108425593} {one 1 1} {three 3 333} {two 2 2}}

select * from abc where 1 = 0
select abc {a b c} a 0 {} {}

select {} {} {} 0 {-5 {table - table not found.}} {}
select {} {} {} 0 {-2 {table - column "colname" not in table.}} {}

UPDATE change existing rows in a table
Syntax UPDATE table SET column=value [ , column=value ]* [ WHERE where_criteria ]
Examples update Alarms set isSet=1 where alarmID=5012 and spname='etch1'

update LotData set processState='Completed' where lotID='6809HXQ15J'
Java Equivalent tableObject.update(tableRow)

tableObject.update(oldRow, newRow)
Description This is the ANSI standard SQL statement for modifying the data of a single table. If you do not provide a WHERE clause, you update all rows of the table. See The Where Clause for the grammar of the where_criteria argument.
Replies
0 UPDATE {abc - N row[s] updated.}
3 UPDATE {table - 0 rows updated.}

-2 UPDATE {table - column "colname" not in table.}
-5 UPDATE {table - table not found.}
-13 UPDATE {unique key violation}
 
 
WRITE save a table to a file
Syntax WRITE [OVER] table [pathname]
Examples

write over TraceMessage

write over SystemConfig backup/SystemConfig.tab

Java Equivalent tableObject.write(String pathname, boolean overWrite)
Description

This command writes a create statement and insert statements representing a table to a file. The file can be read and the table restored using the read command or the include command.

If pathname is specified, it is used as the path and file specification for the table information. Otherwise, the command creates a file by the name of table.tab in the current working directory.

The OVER keyword is used to specify that overwriting an existing file is permitted. 

Replies
0 WRITE {table - table data written to file "file" - 1 row}
0 WRITE {table - table data written to file "file" - N rows}

-9 WRITE {IOException writing file "pathname" : exceptionMessage}
-10 WRITE {table - file "file" exists and overwrite option not set}

 

The List Reply Format

A list is ordinary string text which is structured using space characters or {surrounding braces} to delimit list elements.  Lists are compatible with the list formatting conventions of the Tcl programming language.   The single quotes used by standard SQL are not used for list formats.

The list format described in this section is used for select query results, for table subscription notifications, and for replies to the info command.

The format is a seven element string list. Each element is described below with its zero-based index.

notification (0): This element will be the literal string select for a selection result or list reply.  For a subscription notification, it will be the literal text insert, update, delete, or drop.
tablename (1): This item is the name of the database table. The table name is formatted exactly as it was in the table's create statement.  For non-table selections, the tablename may be more than one token; for example the "info table" command sets this element to Table List.
columns (2): This element is a list of the column names in the same sequence as the data items in the dataList element. For a select statement, the columns are in the order of your request, or in the table create statement order if you queried for *. For a subscription notification, the list will always include the primary keys of the table.
keyColumns (3): This element will be a list of the column names of the primary key columns.
rowCount (4): This element is a count of the data rows in the dataList element. For the usual insert, update, or delete subscription notification, the value is 1 because a separate notification is issued for each row.
errorInfo (5): If your query has an error, such as referring to a column or table that does not exist, or a syntax error, this element is a string containing a non-zero numeric return code followed by a diagnostic message.  If you have a valid query that does not find any data rows, it is not an error, and the errorInfo element is an empty string.
dataList (6): This element is a list of data rows. Each data row has the elements named in the columns element, in the same order. Character strings that have embedded white space are delimited using braces. There are as many rows as specified by the rowCount element.
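To make the structure concrete, here is a minimal, self-contained splitter for the brace-delimited list text. This is a hypothetical helper written for this document, not part of the Datahub API; it handles space separation and nested braces but ignores the backslash and double-quote rules of full Tcl lists:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified top-level splitter for the list reply format: elements are
// separated by whitespace, and an element containing whitespace is wrapped
// in braces, which may nest.  (Hypothetical helper for illustration only.)
class ReplyList {
    static List<String> split(String list) {
        List<String> out = new ArrayList<>();
        int i = 0, n = list.length();
        while (i < n) {
            while (i < n && Character.isWhitespace(list.charAt(i))) i++;
            if (i >= n) break;
            if (list.charAt(i) == '{') {
                int depth = 1, start = ++i;
                while (i < n && depth > 0) {
                    char c = list.charAt(i);
                    if (c == '{') depth++;
                    else if (c == '}') depth--;
                    i++;
                }
                out.add(list.substring(start, i - 1));  // strip outer braces
            } else {
                int start = i;
                while (i < n && !Character.isWhitespace(list.charAt(i))) i++;
                out.add(list.substring(start, i));
            }
        }
        return out;
    }
}
```

Applying the splitter to a reply such as select abc {a b c} a 4 {} {{one 1 1} {two 2 2}} yields seven elements, and element 6 can be split again to obtain the individual data rows.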

A Java programmer can choose among several classes to parse and use list results, such as the TclList class used in the example below.

For building lists, the com.hume.DMH.ListBuilder class has better performance and more convenient usage.

The Datahub GUI can display example code statements for a selected table using the Table/Program statements... menu action.  Here is example parsing code provided by this feature:

// SQL select
String reply = Datahub.sqlReply("select * from SchemaDescription where " + whereClause);
String [][] data;
String [] colnames;
try {
    TclList parts = new TclList(reply);
    if(parts.llength() != 7) return;             // not a well-formed list reply
    colnames = TclList.split(parts.lindex(2));   // element 2: column names
    int colCount = colnames.length;
    TclList rows = new TclList(parts.lindex(6)); // element 6: data rows
    int rowCount = rows.llength();
    data = new String[rowCount][colCount];
    for(int i=0; i < rowCount; i++) {
        TclList row = new TclList(rows.lindex(i));
        for(int j=0; j < colCount; j++) {
            data[i][j] = row.lindex(j);
        }
    }
} catch (ListFormatException e) {
    // A malformed reply leaves data and colnames unassigned; production
    // code should log or rethrow here rather than silently ignoring it.
}


Notation

BOLD is used to indicate commands, keywords, or punctuation that needs to be entered as written. With the SQL commands, the keywords are written in uppercase for emphasis, but you can enter them in lowercase too. 

Italics are used to indicate names or other items where you supply your own instance value. For example, where we have written table you are expected to supply the name of a table.

We use square brackets to enclose items that are optionally specified. For example [WHERE where_criteria] indicates this clause may be omitted. The square brackets themselves are not entered.

When you see the * character following a phrase, it means the phrase is optionally repeated 0 or more times. For example, column [, column]* indicates that one column name must be supplied, followed by zero or more , column phrases.

Finally, the braces are used to show sets of alternate choices, with the choices separated by the | character. For example, { * | column [, column]* }.  This example is shorthand for choosing either the * character or a comma separated list of column names.  The braces and the | character are part of the notation and are not entered when the command is used.

Compatibility Notes

The new Java Datahub developed by Hume Integration Software is a revision of our Tcl/Tk Datahub which uses C code for its table management and DMH communication features.  The Java Datahub is written entirely in Java and it is a substantial revision with some significant differences.

The remarks in this section provide guidance to porting Datahub applications between the new Java version and the older Tcl/Tk version.

The handling of reply formats and return codes has been re-examined completely and we've said goodbye to the legacy formats of earlier versions.   For selection results, there is no longer a possibility of multiple message replies, the Java Datahub provides only the single reply list format.  So there is no need for the telect alias for the select command to indicate a single list reply is desired.  The Java Datahub accepts the keyword telect as equivalent to select.

We've renamed the dump command to info and changed the format of replies to use the list reply format so that the replies have the same structure as table selection results.

There are no longer set and query commands to manipulate property values such as the message system group name.   Instead there are method calls and command line options to initialize message system integration.  The usual methods of Java programming are used to manipulate other properties such as the current working directory.

The Tcl/Tk Datahub uses the eval SQL command as a gateway to executing Tcl code.  The Java version provides compatibility for the eval command by providing the Java developer a means to create and deploy custom methods written in Java that can emulate Tcl procedures.

The Java version requires that every table is declared with primary key columns.   The Tcl/Tk version supports tables that are not keyed.

The new Java version has some changes to foster compatibility with the open-source persistent database sqlite3.  The insert command has been revised so that naming the columns is optional.   There is also a formatting option for SQL output so that semi-colons are appended to the end of each statement.  This option defaults to true so that SQL statement subscription notifications can be fed to an sqlite3 database with no additional manipulation.

The Datahub uses ordinary class and variable representations for table row data so that it can be directly manipulated using type-safe Java code.   This design choice has performance implications - the Tcl/Tk Datahub out-performs the Java version since it can directly manage table data to minimize memory paging with less overhead.

 

Document Version

Date of last revision: $Date: 2011/01/16 20:13:38 $

This document covers the Hume Integration Software developed Java Datahub, which is available for Java SDK 1.6+ on Windows and POSIX platforms.