I will confess that I’ve tended to stick with ye olde exp and imp (export and import) utilities because I know them reasonably well and they do the job. However, the writing has been on the wall for both of them for quite a few years and you’re supposed to have been progressively switching over to using the all-new, all-dancing expdp and impdp replacements. For the stuff I use these utilities for, however, I’ve managed not to need the new versions… until today. We recently upgraded all our databases to 11.2 from 10.2.0.4 -except one, which got converted to 11.1 months ago- and when you try to do a traditional export against the 11.1 database, it terminates with an error:
. exporting materialized views
. exporting snapshot logs
EXP-00008: ORACLE error 1455 encountered
ORA-01455: converting column overflows integer datatype
EXP-00000: Export terminated unsuccessfully
Hunting around on the subject, it seems there’s a bug in later versions of exp that makes this error expected behaviour, which is a bit of a show-stopper. However, those same bug reports unanimously declare that the new Data Pump version of export does not suffer from the same problem. So, finally, it’s time to switch to using the new “dp” utilities after all!
The trouble is, however, that we’ve used the traditional utilities to pull production data over to development environments. Whilst logged onto the dev server, for example, I’d do something like:
exp "sys/[email protected] as sysdba" file=prod_exp.dmp full=y direct=y
…and the export file, called “prod_exp.dmp”, would be produced in whatever directory I happened to be sitting in when I typed the command, on the dev server. Production data was thus dumped out to a directory on the dev server, thanks to the use of a tns alias in the connection string bit of the command. Put another way: the export utility runs locally on my development server, but knows to connect to my production database to fetch its data.
Unfortunately, this is not how the Data Pump utilities work. They always run on the server and never on the client. Which means that, other things being equal, the output file is always on the server, too -even if the utility is invoked remotely!
So, for example, you could issue this command whilst logged into the development server:
expdp "sys/[email protected] as sysdba" directory=dumpdir file=prod_exp.dmp full=y
…and the thing would work just fine. But when you then navigate the file system of your development server, you will not be able to find a file called “prod_exp.dmp” anywhere! And that’s because the file has been written on the production server, in whatever location the “dumpdir” directory object is pointing to on that server.
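Incidentally, that “dumpdir” directory object has to exist in the database doing the writing before Data Pump will produce anything, and the exporting user needs read and write privileges on it. If it isn’t there already, something along these lines would do the trick on the production database (the file system path here is just an invented example, of course):

create directory dumpdir as '/u01/app/oracle/dpdump';
grant read, write on directory dumpdir to system;

And if you want to know where an existing directory object actually points, you can always ask the data dictionary:

select directory_name, directory_path from dba_directories;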
You can invoke data pump export remotely, in other words, but all the action and all the output will only ever happen on the remote database. If you want the dumpfile to be sitting on your development server’s hard disk at the end of the exercise, this sort of syntax won’t achieve that, I’m afraid!
But this, happily, will:
expdp "sys/password as sysdba" network_link=prodserver directory=dumpdir file=prod_exp.dmp full=y
With this syntax, typed on the dev box, you’re now connecting locally -that is, to the development database. That therefore means you’re going to be outputting to whatever part of the development server’s file system is referenced by the dev box’s “dumpdir” directory object. But, the new parameter network_link means that the development database doing all the work will know to connect to whatever database is at the end of the “prodserver” database link in order to get its data (so, obviously, for this syntax to work, I must first have issued a command such as create database link prodserver connect to system identified by password using ‘dbprod’). This syntax -a local connection string, but an extra network_link parameter- therefore achieves what the old export command did: production data “pulled” across to my development server and ending up as a dump file on my dev server’s hard disk.
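Putting the pieces together, then, the dev-side preparation before running that expdp command amounts to something like this (again, the file system path and the ‘dbprod’ tns alias are just examples, and the link’s username and password are whatever suits your environment):

create directory dumpdir as '/u01/app/oracle/dev_dumps';
create database link prodserver connect to system identified by password using 'dbprod';

With those two in place, the expdp command above connects locally, reaches across the prodserver link to read the production data, and writes prod_exp.dmp into the directory the dev box’s dumpdir object points at.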
In short, the trick is not to invoke the Data Pump utility with an old-fashioned “@somewhere” to achieve a ‘remote connection’, but to use one of the new parameters the new utility makes available to you to have the data ‘pumped’ from a remote location. Easy -though perhaps not entirely obvious!