
Disclaimer: this page is a Chinese/English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA terms and attribute it to the original authors (not the translator): StackOverflow, original URL: http://stackoverflow.com/questions/19838517/

Date: 2020-09-19 02:04:16  Source: igfitidea

Move large data between tables in oracle with bulk insert

sql · oracle · plsql · oracle11g · bulk-insert

Asked by Hyman

I want to move 1 million rows of data to another table. I'm using this query:


insert into Table1
select * from Table2;

in my PL/SQL function, but this approach is too slow.


How can I do this with a bulk insert method?


  • The source and destination tables have the same structure.
  • The tables are hash partitioned and have one index.
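For reference, the PL/SQL bulk-bind technique the question is asking about would look roughly like the sketch below, using the question's Table1/Table2 (the batch size of 10000 is illustrative). As the answer below explains, this is usually no faster than a single INSERT ... SELECT:

```sql
-- Hypothetical BULK COLLECT / FORALL sketch; batch size is an assumption.
DECLARE
  CURSOR c_src IS SELECT * FROM Table2;
  TYPE t_rows IS TABLE OF Table2%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;  -- fetch rows in batches
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT                      -- one bulk bind per batch
      INSERT INTO Table1 VALUES l_rows(i);
    COMMIT;                                            -- commit each batch
  END LOOP;
  CLOSE c_src;
END;
/
```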

Answered by SriniV

Forget about PL/SQL bulk inserts: INSERT ... SELECT is already the most efficient bulk load you can run. The fastest approach is to disable the indexes (mark them unusable) and do the load in a SINGLE direct-path insert:


insert /*+ append */ into TARGET  -- APPEND hint requests a direct-path load
select COLS
  from SOURCE;

commit;  -- a direct-path insert must be committed before the table can be queried again

and then rebuild the indexes with NOLOGGING (the UNRECOVERABLE keyword in older releases), possibly even in parallel.

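Concretely, the disable-load-rebuild sequence could look like the sketch below. The index name idx_target, the table names, and the parallel degree are placeholders, not names from the question:

```sql
-- Mark the index unusable so the direct-path load skips index maintenance.
ALTER INDEX idx_target UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

INSERT /*+ APPEND */ INTO target
SELECT * FROM source;
COMMIT;

-- Rebuild without redo logging, optionally in parallel.
ALTER INDEX idx_target REBUILD NOLOGGING PARALLEL 4;
ALTER INDEX idx_target NOPARALLEL;  -- reset the index's default degree afterwards
```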

PS: if both the source and target tables are partitioned, you can even use parallel inserts.


FOLLOW UP:


First, check the performance of the SELECT below:


SELECT    /*+ PARALLEL(A 4)
            USE_HASH(A) ORDERED */
      YOUR_COLS
FROM
      YOUR_TABLE A
WHERE
      ALL_CONDITIONS;

If it is faster, then run:


INSERT   /*+ APPEND */
     INTO
      TARGET
    SELECT /*+ PARALLEL(A 4)
            USE_HASH(A) ORDERED */
          YOUR_COLS
    FROM
          YOUR_TABLE A
    WHERE
          ALL_CONDITIONS;
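Note that for the INSERT itself to run in parallel (not just the SELECT), parallel DML must be enabled in the session first. A common pattern is sketched below; the degree of 4 and the table names target/source are illustrative:

```sql
ALTER SESSION ENABLE PARALLEL DML;  -- required before a parallel INSERT

INSERT /*+ APPEND PARALLEL(t 4) */ INTO target t
SELECT /*+ PARALLEL(a 4) */ *
  FROM source a;

COMMIT;
```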