oracle sql plus spool

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/2560398/

oracle sql plus spool

Tags: sql, oracle, sqlplus, spool

Asked by CC.

I'm using SQL*Plus to execute a query (a select) and dump the result into a file using the spool option. I have about 14 million lines, and it takes about 12 minutes to do the dump. I was wondering if there is something to make the dump faster?

Here are my SQL*Plus options:

        whenever sqlerror exit sql.sqlcode
        set pagesize 0
        set linesize 410
        set trimspool on
        set heading on
        set feedback off
        set echo off
        set termout off
        spool file_to_dump_into.txt
        select * from mytable;

Thanks.

Answered by Indolent Coder

Are you concatenating & delimiting your columns, or are you exporting fixed-width?

See this documentation on SQL*Plus Script Tuning. Specific to your script, here are a few possible ways to speed it up:

  1. Make sure LINESIZE is as small as possible. Add up your max column lengths (plus delimiters if not fixed-width). This can have a dramatic effect on performance, as SQL*Plus allocates that amount of memory for every exported line. 410 isn't that big, but if you can decrease it, that would help. This has made a big difference, in my experience.
  2. Don't turn TRIMSPOOL on. This can also have a big impact. Each line will then be padded out to LINESIZE, but with an optimal linesize, and depending on how you're using the file, that may be acceptable. However, if you want to eliminate trailing spaces entirely, it can often be faster to trim them using other methods post-export.
  3. Play around with ARRAYSIZE. It may help (a little). It sets the fetch size for SQL*Plus. The default is 15 rows. Bumping it to, say, 100 may help, but going too large might decrease speed. A sketch combining these settings follows this list.
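
Putting those suggestions together, a minimal sketch of a tuned version of the script might look like the following. The LINESIZE of 200 and ARRAYSIZE of 100 are placeholder values to test against your actual column widths, not measured recommendations:

whenever sqlerror exit sql.sqlcode
set pagesize 0
-- keep linesize close to the real maximum row width (200 is only a placeholder)
set linesize 200
-- leave trimspool off and strip trailing blanks after the export if they matter
set trimspool off
-- fetch more rows per round trip than the default of 15
set arraysize 100
set feedback off
set echo off
set termout off
spool file_to_dump_into.txt
select * from mytable;
spool off

Pushing ARRAYSIZE much past a few hundred usually brings diminishing returns, so it is worth timing a couple of values rather than assuming bigger is better.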

Hope this helps!

Answered by Matthew Watson

You might find it quicker to use UTL_FILE, but probably not that much faster.

In my test it was slightly faster over about 20k rows; blow that out to 14 million, though, and it might be worth it.

I believe if you want to get any quicker than this, the way to go would be Pro*C, but I haven't got into that, so I can't really advise.

set pagesize 1000
set FLUSH OFF
drop user usera cascade;
create user usera default tablespace users identified by abc123;
grant create session to usera;
grant resource to usera;

create or replace directory testdir as '/tmp';
grant read,write on directory testdir to usera;
grant execute on UTL_FILE to usera;

connect usera/abc123;

set timing on

spool /tmp/spooltest.txt
select object_name from all_objects;
spool off
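
-- UTL_FILE version of the same dump, timed below for comparison with the spool above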

DECLARE
 v_file UTL_FILE.FILE_TYPE;
 TYPE t_col is table of all_objects.object_name%type index by PLS_INTEGER;
 v_object_names t_col;

BEGIN
  v_file := UTL_FILE.FOPEN('TESTDIR','utlfiletext.txt','w');

  select object_name BULK COLLECT INTO v_object_names
  from all_objects;

  for idx IN 1 .. v_object_names.COUNT LOOP
    UTL_FILE.PUT_LINE(v_file, v_object_names(idx), FALSE);
  END LOOP;

   UTL_FILE.FCLOSE(v_file);
END;
/

The results: the top timing is from the plain SQL*Plus spool, the bottom from UTL_FILE.

23931 rows selected.

Elapsed: 00:00:06.60

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.45
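
One caveat with the PL/SQL version above: it bulk-collects every row into a single in-memory collection, which is fine for ~24k object names but could consume a lot of PGA memory at the 14 million rows in the question. A variation on the same idea, sketched below, fetches in chunks with BULK COLLECT ... LIMIT; the chunk size of 10000 is an arbitrary illustrative value, and this sketch has not been benchmarked:

DECLARE
  v_file UTL_FILE.FILE_TYPE;
  TYPE t_col IS TABLE OF all_objects.object_name%TYPE INDEX BY PLS_INTEGER;
  v_object_names t_col;
  CURSOR c_names IS SELECT object_name FROM all_objects;
BEGIN
  v_file := UTL_FILE.FOPEN('TESTDIR', 'utlfiletext.txt', 'w');
  OPEN c_names;
  LOOP
    -- fetch the next chunk; memory use stays bounded by the LIMIT
    FETCH c_names BULK COLLECT INTO v_object_names LIMIT 10000;
    FOR idx IN 1 .. v_object_names.COUNT LOOP
      -- autoflush FALSE, as in the original example, to avoid flushing every line
      UTL_FILE.PUT_LINE(v_file, v_object_names(idx), FALSE);
    END LOOP;
    -- %NOTFOUND is checked after processing so the last partial chunk is written
    EXIT WHEN c_names%NOTFOUND;
  END LOOP;
  CLOSE c_names;
  UTL_FILE.FCLOSE(v_file);
END;
/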

Answered by Quassnoi

With a typical query, 14M records is at least several hundred megabytes of data to fetch out of the server, pass across the connection and save to disk.

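As a rough back-of-the-envelope check (assuming an arbitrary average of 40 bytes of data per row): 14,000,000 × 40 bytes ≈ 560 MB has to leave the server; and if the spool file were padded to the full 410-character LINESIZE (i.e. with TRIMSPOOL off), the file itself would approach 14,000,000 × 410 ≈ 5.7 GB.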

Given this, 12 minutes does not sound like too much to me.

However, it is still possible that your query can be optimized. Could you please post it here?

Answered by Kuberchaun

So is this going over the wire, or are you logged into the box that has the database? If you have access, maybe you can run your SQL*Plus session on the box where the database lives, zip the file up, and then send it to your local machine. It might be faster to send one big file over the wire than millions of smaller records. Of course this won't make it super fast, but it might shave some time off.

Also, with that much data, do you really need to spool it to a file? Can you do an export instead?

Answered by user38123

You can enable output buffering by adding the following to your script:

SET FLUSH OFF

But the result depends on your OS.

Answered by AndyDan

When getting a lot of results from a query in SQL*Plus, I've found that one thing that takes a lot of time is the actual displaying of the data. If you're spooling the data to a file, you can SET TERMOUT OFF, and the query runs much faster since it doesn't have to spend the time to write it to the screen.

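One thing worth noting about this approach: TERMOUT OFF only suppresses output for commands run from a script file; it has no effect on statements typed interactively. So a sketch like the following would be saved to a file (dump.sql is just a hypothetical name here) and run with @dump.sql:

set termout off
spool file_to_dump_into.txt
select * from mytable;
spool off
set termout on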

Answered by Adam Hawkes

Some options are available from Tom Kyte, who is a real guru.
