The requirements are to take an existing set of identically structured tables (AAA0001 - AAA9999), which generally don't meet our current requirements, and swap them out for three sets of tables (XXX0001-XXX9999, YYY0001-YYY9999 and ZZZ0001-ZZZ9999).

Currently, the developer programming this update, which has to run on our clients' machines (some run MySQL, some Microsoft SQL Server), is using row-by-row SELECT and INSERT queries, which is bound to make the next update the slowest and most painful in the history of our product.

I'd think we could do something faster by dumping the tables, doing some string manipulation on the dump, and re-importing the result.

However, we've only ever used ODBC as a database connector in our program, and we don't necessarily have the facilities to do both a MySQL and a SQL Server dump (unless it's easy).

So, what's the speediest way to dump and undump rows over ODBC? If there is no speedy way, what's the safest and easiest-to-implement alternative?


We use Delphi XE2 Enterprise, in case anyone knows of 3rd-party DB tools or shortcuts. We've never used DataSnap, since we didn't have the "good version" of Delphi until we upgraded to XE2, so I'm clueless as to whether it has this functionality.

    • Look into BCP and BULK INSERT operations. You can bulk insert millions of rows very quickly. You might be able to use that when moving data around.
    – Jon Raynor
    Commented Mar 13, 2012 at 21:09
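
BCP and BULK INSERT are SQL Server's bulk-loading tools, so they won't exist on the MySQL side, but the underlying idea (stream a flat-file dump into a table in large batches rather than one round trip per row) is portable. A minimal sketch of that flow, using Python's sqlite3 as a stand-in for a real ODBC connection; the table and column names are illustrative, not the product's real schema:

```python
import csv
import io
import sqlite3

# Stand-in for a bcp-style bulk load: parse a flat-file dump and insert it
# in one batched call instead of one INSERT round trip per row.
dump = io.StringIO("1,alpha\n2,beta\n3,gamma\n")
rows = list(csv.reader(dump))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE XXX0001 (id INTEGER, payload TEXT)")
con.executemany("INSERT INTO XXX0001 VALUES (?, ?)", rows)  # one batched call

count = con.execute("SELECT COUNT(*) FROM XXX0001").fetchone()[0]
```

On SQL Server itself the equivalent would be the bcp command-line utility or a BULK INSERT statement pointed at the dump file.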

3 Answers


I would do this directly on the SQL Server.

Restore the source DB onto the target DB server, then create a bunch of INSERT statements:

INSERT INTO TargetDb.dbo.XXX0001 ("column1", "column2", ...)
SELECT "column3", "column4", ...
FROM SourceDb.dbo.AAA0001
--
INSERT INTO TargetDb.dbo.XXX0001 ("column1", "column2", ...)
SELECT "column3", "column4", ...
FROM SourceDb.dbo.AAA0002
--
INSERT INTO TargetDb.dbo.XXX0001 ("column1", "column2", ...)
SELECT "column3", "column4", ...
FROM SourceDb.dbo.AAA0003
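
With thousands of source tables, writing these statements by hand won't scale, so a short script can generate the batch. A hedged sketch: the one-to-one AAA-to-XXX mapping and the column names below are placeholders copied from the example above, and the real mapping onto the XXX/YYY/ZZZ sets will differ.

```python
# Generate the repetitive INSERT ... SELECT statements instead of writing
# thousands by hand. Prefixes, columns, and the one-to-one table mapping
# are illustrative; adjust them to the real AAA -> XXX/YYY/ZZZ scheme.
def make_copy_statements(src_prefix, dst_prefix, count):
    stmts = []
    for i in range(1, count + 1):
        stmts.append(
            f"INSERT INTO TargetDb.dbo.{dst_prefix}{i:04d} (column1, column2)\n"
            f"SELECT column3, column4\n"
            f"FROM SourceDb.dbo.{src_prefix}{i:04d};"
        )
    return stmts

script = "\n".join(make_copy_statements("AAA", "XXX", 3))
```

The generated script can then be run as a single batch on the server.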

Or use the SQL Server Import and Export Wizard.

  • That's just a good idea. The second part I can't do, but I think we could do the first. The only problem is we've got to unpack something we can only figure out with recursive queries. But I think this would still be fast, especially if all the blob moving is done in an INSERT INTO ... SELECT statement. Commented Mar 13, 2012 at 19:25

It has a slight learning curve, but SQL Server Integration Services (SSIS) is built for doing exactly what you want, and it does it quite quickly too. It supports straight data dumps and even data transformation, using a "pipeline" approach, to and from SQL Server and almost any other data source you can throw at it.

Rhino ETL is another tool that couples the speed and power of SSIS with a more code-friendly approach to ETL.


The fastest way would be to use set-based operations on the server, using ODBC as a simple SQL command pass-through. It would work something like this:

1) Run SQL DDL to create the new table.
2) Run a SQL command to SELECT from the old table INTO the new table, with the required data massaging.
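
A minimal sketch of those two steps, with Python's sqlite3 standing in for the ODBC pass-through; the table names and the UPPER()/arithmetic "massaging" are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE AAA0001 (code TEXT, qty INTEGER)")
con.executemany("INSERT INTO AAA0001 VALUES (?, ?)",
                [("a", 1), ("b", 2), ("c", 3)])

# Step 1: DDL to create the new table.
con.execute("CREATE TABLE XXX0001 (code TEXT, qty INTEGER)")
# Step 2: one set-based statement; the massaging runs on the server,
# so no row data crosses the wire to the client.
con.execute("INSERT INTO XXX0001 SELECT UPPER(code), qty * 10 FROM AAA0001")

rows = con.execute("SELECT code, qty FROM XXX0001 ORDER BY code").fetchall()
```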

Doing anything row-by-row on the client side will be orders of magnitude slower than this sort of approach, though the client-side route does have some advantages, especially if the data transformation step gets complex.
