Infor M3 #5 -- How to run SQL queries in MAK


Sometimes we may find it difficult to use the normal database methods to retrieve data from DB tables when the logic is complex. We may even end up creating multiple indexes just to get accurate results in a single shot, instead of looping through the records and filtering them by checking a few conditions.

Isn't this process tedious? To make your work easier, there is a facility to run SQL queries directly from MAK.


We have two types of SQL query execution in MAK, as mentioned below:
  1. SQL query on a specific table
  2. SQL query on multiple tables
Let us see the syntax and behaviour of each in MAK.

1. SQL query on a specific table

Well, this is pretty much a select * from XXXXXX query, which gives us the entire table's data. We execute SQL queries on the session object, which holds the current user's session.
Syntax:
session.OPEN_SQL(query, def, nrOfRecords);
Return type:
Byte[][]
Parameters
  • query refers to the SQL query you want to pass, i.e., select * from table where conditions
  • def refers to the name of the table whose definition is used to map the query results
  • nrOfRecords refers to the maximum number of records you wish to retrieve
This method returns a two-dimensional byte array. We convert each byte[] into an MvxRecord and assign it to the DB table object, and then the retrieval process continues as normal.

It is obvious that when we run SQL through code we must handle SQL exceptions as well; otherwise we will end up with a dumplog.

All the records retrieved through the session object are stored in Java heap space, so we should keep the number of records to be retrieved within a limit.

Let us take an example: retrieve all items from MITMAS where CHID is "MyUserID".

String userId = "MyUserID";
String query = "select * from MITMAS where MMCHID='" + userId + "'";
String def = "MITMAS";
int nrOfRecords = 1000;
try {
    for (byte[] recordBytes : session.OPEN_SQL(query, def, nrOfRecords)) {
        if (recordBytes == null) // end of records
            break;
        MvxRecord mitmasRecord = new MvxRecord(recordBytes); // wrap the raw bytes in an MvxRecord
        ITMAS.setRecord(mitmasRecord); // assign the record to the MITMAS table object
        ITMAS.getITNO(); // read the fields as usual
    }
} catch (Exception e) {
    // handle the SQL exception -- no records exist or the query failed
}

As we can't predict the number of records present in the table, we make a null check to ensure the program won't terminate abnormally.
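One caution about the example above: the query is built by plain string concatenation, so a value containing a single quote (e.g. a user ID like O'Brien) would break the statement or even inject SQL. A minimal precaution, sketched below in plain Java, is to double any single quotes before embedding the value. The helper name escapeSqlLiteral is illustrative only and is not part of the MAK API.

```java
public class SqlEscapeDemo {
    // Doubles single quotes so a value can be safely embedded in a SQL string literal.
    // Illustrative helper -- not part of the MAK API.
    static String escapeSqlLiteral(String value) {
        return value.replace("'", "''");
    }

    public static void main(String[] args) {
        String userId = "O'Brien";
        // The escaped value keeps the literal well-formed.
        String query = "select * from MITMAS where MMCHID='" + escapeSqlLiteral(userId) + "'";
        System.out.println(query); // select * from MITMAS where MMCHID='O''Brien'
    }
}
```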

2. SQL query on multiple tables

Well, this method is used either to retrieve a limited set of columns or to execute join queries in SQL, giving us data based on the columns we have written in the query.

Syntax:
session.SQL(query, nrOfRecords);
Return type:
MvxString[]
Parameters
  • query refers to the SQL query you want to pass, i.e., select column_name from table where conditions
  • nrOfRecords refers to the maximum number of records you wish to retrieve
This method returns a one-dimensional array of MvxString (a character stream based on the query). You should split each MvxString on the tab delimiter (\t) to separate the column data.

You must know the position of each column in the retrieved MvxString.
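Hard-coding column positions is easy to get wrong when the query changes. One option, sketched below in plain Java (no MAK classes, so the splitting logic stands alone), is to derive a name-to-position map from the same column list used in the query; the tab-separated sample line only stands in for a record returned by session.SQL().

```java
import java.util.HashMap;
import java.util.Map;

public class ColumnSplitDemo {
    // Builds a name -> position map from the same column list used in the query.
    static Map<String, Integer> columnIndex(String... columns) {
        Map<String, Integer> index = new HashMap<>();
        for (int i = 0; i < columns.length; i++) {
            index.put(columns[i], i);
        }
        return index;
    }

    public static void main(String[] args) {
        Map<String, Integer> idx = columnIndex("PPPLNM", "POOPNO", "PPDEPT");
        // Sample record in the tab-delimited form session.SQL() is described to return
        String record = "Assembly line 1\t0010\tASM";
        String[] cols = record.split("\t");
        System.out.println(cols[idx.get("PPPLNM")]); // Assembly line 1
        System.out.println(cols[idx.get("POOPNO")]); // 0010
    }
}
```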

It is obvious that when we run SQL through code we must handle SQL exceptions as well; otherwise we will end up with a dumplog.

All the records retrieved through the session object are stored in Java heap space, so we must keep the number of records to be retrieved within a limit.

Let us take an example: retrieve the work centre name from MPDWCT by providing the work centre from MPDOPE.

String query = "select PPPLNM, POOPNO, PPDEPT from MPDWCT, MPDOPE where PPPLGR=POPLGR and POPRNO='ProductXX'";
int nrOfRecords = 10;
try {
    boolean isFirstRecord = true;
    for (MvxString recordString : session.SQL(query, nrOfRecords)) {
        if (recordString == null) // end of records
            break;
        if (isFirstRecord) { // skip the first record
            isFirstRecord = false;
            continue;
        }
        String[] columns = recordString.toString().split("\t");
        String plnm = columns[0]; // PPPLNM -- work centre name
        String opno = columns[1]; // POOPNO -- operation number
        String dept = columns[2]; // PPDEPT -- department
    }
} catch (Exception e) {
    // handle the SQL exception -- no records exist or the query failed
}

As we can't predict the number of records present in the table, we make a null check to ensure the program won't terminate abnormally.

Advantages

  • We can avoid creating indexes to get data on columns that are not part of any index
  • No more bothering about the field sequence in an index
  • No more looping through records to filter the data with extra checks
  • We can use aggregate functions to fetch sum, avg, count, etc. with the session.SQL() method
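For instance, a count(*) query returns a single value as text. Assuming session.SQL() hands that value back as the record's string content, extracting it is just a trim and a parse; the hard-coded result line below only simulates what the session would return, since session.SQL() cannot run outside M3.

```java
public class AggregateParseDemo {
    public static void main(String[] args) {
        // The aggregate query you would pass to session.SQL(query, 1); shown for context only.
        String query = "select count(*) from MITMAS where MMCHID='MyUserID'";
        // Simulated result line -- in MAK this string would come back from session.SQL().
        String resultLine = " 1345 ";
        int count = Integer.parseInt(resultLine.trim());
        System.out.println("Record count: " + count); // Record count: 1345
    }
}
```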

Disadvantages

Well, they say that with great power comes great responsibility 😉

As said earlier, the session data is stored in Java heap memory, so we should be very careful while choosing the number of records to be retrieved. If the heap space is completely used, a dumplog will be generated.


That's it! Come back here and tell us about the before-and-after. I bet you'll have something to say!

Please let me know your thoughts in the comment section below!

If you enjoyed this post, I'd be very grateful if you'd help it spread by emailing it to a friend, or sharing it on Twitter or LinkedIn.

In our next post, let us look at the Infor M3 Standard Search feature.

Thanks for reading!

--Saiteja Saadi

Comments

  1. Very interesting article!

    I think it would make sense to have a generic MI program that takes an SQL query as an input value and then generates an output according to the select statement in the query. The problem is maybe that the output data structure of the MI transaction cannot be flexible or change according to what input string the user provided? How do you see it?

  2. Hey, thanks for reading the post! The idea of making it generic is very interesting, but the concern is the size of the input to the MI and the output from the MI.

    I will give it a try and update if it is a success!

  3. Yes, I am of the same opinion about the size of the input to the MI and the output from the MI; it is limited in M3.
    But maybe there is a trick to use a fixed input and output structure with multiple fields.
    Like splitting the input SQL into several input fields to reach a bigger input size, then concatenating the inputs in MAK into one big SQL statement and executing the query; at least the predefined output data structure is filled according to the selected fields.

    1. Yes, that's true, I am working on this idea. We could use multiple input fields to take input to M3 and concatenate them, but there would be a problem with the output, as we can't guess anything because it is completely based on the query. Here's how I see it: if we give output like column1|column2|column3|... then whoever is using this MI could take the output string, divide the data using the '|' delimiter, and use it for further processing.

      What do you think?

    2. Hi,
      I understand your idea of separating the output with the "|" character, but you have to consider that the "|" character can also be part of an item description or a customer name. Then it would be nearly impossible to divide the result of the MI.

      You could instead output a JSON string for each result line. The MI would then have one output value = a JSON string with a maximum length of circa 980 characters. MEC, scripting and many external applications should be able to parse and handle a JSON string. That would be a really flexible solution then.
      The only problem I see here is mashups, where JSON parsing is (I think) not supported. In mashups we really need an output format like the standard MIs or some REST/SOAP thing.

    3. I'm happy that you like the idea.
      Maybe we can find a solution for mashups as well?

    4. Not sure about Mashups, I haven't worked on it 🤔

  4. Hello,
    Thanks for this wonderful article. I am working on a similar kind of requirement.
    I have the below queries:
    1. Can we write the extracted data of the SQL query to a CSV file?
    2. Does a new MAK program need to be developed for this?
    3. How much data volume can we get from the result of the query execution? In my case the data volume can go up to 20k.

    Thanks,
    Rohit

