I've been using DataSnap very successfully for years now. Lately I've been optimizing it for the best possible performance, especially when dealing with a large number of records.
Andreas Hausladen did a great job with his Midas SpeedFix, but there is more.
I have a simple table named "streets" containing a few fields: ID (integer), NAME (varchar) and some others (it is a large DataSet containing all the street names of all the cities in my state). I'm using ADO (dbGo) to access it. The table has 50,000 records, and I have an ADOQuery with this SQL statement:
SELECT * FROM streets
Connected to this ADOQuery I have a DataSetProvider and a ClientDataSet. Opening the ADOQuery takes exactly 1.5 seconds, but opening the ClientDataSet takes 59 seconds! 59 seconds to open a query is completely out of the question in a production environment.
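For reference, a trivial harness is enough to reproduce these timings. This is only a sketch: the component names (ADOQuery1, DataSetProvider1, ClientDataSet1) and the use of GetTickCount are my assumptions, not taken from the original test:

```delphi
procedure TForm1.BenchmarkClick(Sender: TObject);
var
  T: Cardinal;
begin
  T := GetTickCount;
  ADOQuery1.Open;        // SELECT * FROM streets, straight through dbGo
  ShowMessage(Format('ADOQuery.Open: %d ms', [GetTickCount - T]));
  ADOQuery1.Close;

  T := GetTickCount;
  ClientDataSet1.Open;   // same query, but through DataSetProvider1
  ShowMessage(Format('ClientDataSet.Open: %d ms', [GetTickCount - T]));
end;
```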
Some people will say: "ADO didn't fetch all the records, hence the difference." That's not true. You can open the ADOQuery and go to the LAST record (fetching all 50,000 records), and the time is the same: 1.5 seconds to fetch 50,000 records. Nice number.
Other people will say: "You can't have a 50,000-record ClientDataSet!". Can't I? Why not? Fifteen years ago they told me to "fetch only a few records, not thousands!". Nowadays we have application servers with 8 GB of RAM and 10 Mbps internet links, and the same limits remain? ClientDataSets can be used as a cache mechanism, freeing the database server from returning the same result sets over and over again, but for that I need to put more than a few hundred records in them. Besides that, there are briefcase-model applications. How can one create a briefcase application using DataSnap if the DataSnap framework imposes such a low limit?
So I decided to find out why ADO takes only 1.5 seconds while DataSnap takes 40 times longer. Profiling Provider.pas, I found that most of that time is spent inside the TDataSetProvider.InternalGetRecords method. TDataSetProvider.InternalGetRecords calls the inherited TBaseProvider.InternalGetRecords, which in turn calls TDataSetProvider.CreateDataPacket. It is a really long chain of method calls, but in the end I discovered that the bottleneck is TDataPacketWriter.WriteDataSet. Looking at this method (abridged) gives us a clue:
while (not DataSet.EOF) and (Result <> ...) do
begin
  ...
  DataSet.Next;
end;
As we can see, there is a while loop going through all the records of the DataSet. We know that this kind of construction can be very slow, because each DataSet.Next call fires a long chain of events when DataSet.ControlsDisabled = False, that is, when we haven't called DataSet.DisableControls before entering the loop.
So, just for testing purposes, I made a small change to the code, like this:
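A minimal sketch of that change (the exact surrounding code in Provider.pas differs; the point is simply to wrap the record loop in DisableControls/EnableControls):

```delphi
DataSet.DisableControls;  // stop per-record event firing during the loop
try
  while not DataSet.EOF do
  begin
    // ... write the current record into the data packet ...
    DataSet.Next;
  end;
finally
  DataSet.EnableControls; // always restore notifications
end;
```

The try/finally block guarantees that controls are re-enabled even if packet building raises an exception.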
The results were incredible: the time dropped from 59 seconds to 3.3 seconds, about 94% faster than before! And the larger the number of rows in the DataSet, the bigger the difference.
There are two practical problems with this approach:
1) Calling DataSet.DisableControls programmatically at every point in the code where a ClientDataSet is connected to a DataSetProvider is totally out of the question;
2) DisableControls cannot be used when the DataSet is acting as the master in a master/detail relationship.
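To illustrate problem 2 (component names here are hypothetical): while the master's controls are disabled, its scroll notifications are not propagated through the DataSource, so a linked detail DataSet stops following the master, and the provider would pack stale detail records:

```delphi
// qMaster and qDetail are linked via dsMaster (qDetail.DataSource = dsMaster).
qMaster.DisableControls;
try
  qMaster.First;
  while not qMaster.EOF do
  begin
    // With controls disabled, dsMaster receives no scroll notification,
    // so qDetail is NOT re-filtered for each master row here.
    qMaster.Next;
  end;
finally
  qMaster.EnableControls; // qDetail only resynchronizes now
end;
```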
Another interesting thing I've found is that dbExpress doesn't suffer from the same problem; that is, DisableControls makes little difference there (I will benchmark it too).
I will write about the solution I've created in a new post.