C# SqlBulkCopy Not Working

Note: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/438587/

Date: 2020-08-04 03:25:03  Source: igfitidea

SqlBulkCopy Not Working

Tags: c#, asp.net, sqlbulkcopy

Asked by

I have a DataSet populated from an Excel sheet. I want to use SqlBulkCopy to insert records into the Lead_Hdr table, where LeadId is the PK.


I get the following error when executing the code below:


The given ColumnMapping does not match up with any column in the source or destination


string ConStr=ConfigurationManager.ConnectionStrings["ConStr"].ToString();

using (SqlBulkCopy s = new SqlBulkCopy(ConStr,SqlBulkCopyOptions.KeepIdentity))
{
    if (MySql.State==ConnectionState.Closed)
    {
        MySql.Open();
    }

    s.DestinationTableName = "PCRM_Lead_Hdr";
    s.NotifyAfter = 10000;

    #region Comment
    s.ColumnMappings.Clear();

    #region ColumnMapping
    s.ColumnMappings.Add("ClientID", "ClientID");
    s.ColumnMappings.Add("LeadID", "LeadID");
    s.ColumnMappings.Add("Company_Name", "Company_Name");
    s.ColumnMappings.Add("Website", "Website");
    s.ColumnMappings.Add("EmployeeCount", "EmployeeCount");
    s.ColumnMappings.Add("Revenue", "Revenue");
    s.ColumnMappings.Add("Address", "Address");
    s.ColumnMappings.Add("City", "City");

    s.ColumnMappings.Add("State", "State");
    s.ColumnMappings.Add("ZipCode", "ZipCode");
    s.ColumnMappings.Add("CountryId", "CountryId");

    s.ColumnMappings.Add("Phone", "Phone");
    s.ColumnMappings.Add("Fax", "Fax");
    s.ColumnMappings.Add("TimeZone", "TimeZone");
    s.ColumnMappings.Add("SicNo", "SicNo");
    s.ColumnMappings.Add("SicDesc", "SicDesc");

    s.ColumnMappings.Add("SourceID", "SourceID");
    s.ColumnMappings.Add("ResearchAnalysis", "ResearchAnalysis");
    s.ColumnMappings.Add("BasketID", "BasketID");
    s.ColumnMappings.Add("PipeLineStatusId", "PipeLineStatusId");

    s.ColumnMappings.Add("SurveyId", "SurveyId");
    s.ColumnMappings.Add("NextCallDate", "NextCallDate");
    s.ColumnMappings.Add("CurrentRecStatus", "CurrentRecStatus");
    s.ColumnMappings.Add("AssignedUserId", "AssignedUserId");
    s.ColumnMappings.Add("AssignedDate", "AssignedDate");
    s.ColumnMappings.Add("ToValueAmt", "ToValueAmt");
    s.ColumnMappings.Add("Remove", "Remove");
    s.ColumnMappings.Add("Release", "Release");

    s.ColumnMappings.Add("Insert_Date", "Insert_Date");
    s.ColumnMappings.Add("Insert_By", "Insert_By");
    s.ColumnMappings.Add("Updated_Date", "Updated_Date");
    s.ColumnMappings.Add("Updated_By", "Updated_By");

    #endregion
    #endregion

    s.WriteToServer(sourceTable);

    s.Close();

    MySql.Close();
}

Accepted answer by Marc Gravell

Well, is it right? Do the column names exist on both sides?


To be honest, I've never bothered with mappings. I like to keep things simple - I tend to have a staging table that looks like the input on the server, then I SqlBulkCopy into the staging table, and finally run a stored procedure to move the data from the staging table into the actual table; advantages:


  • no issues with live data corruption if the import fails at any point
  • I can put a transaction just around the SPROC
  • I can have the bcp work without logging, safe in the knowledge that the SPROC will be logged
  • it is simple ;-p (no messing with mappings)

As a final thought - if you are dealing with bulk data, you can get better throughput using IDataReader (since this is a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for a SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader - very fast.

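The staging-table pattern described above can be sketched roughly as follows. The staging table and stored procedure names here (`PCRM_Lead_Hdr_Staging`, `usp_MergeLeadHdr`) are hypothetical, not from the original question:

```csharp
// Sketch of the staging-table pattern: bulk copy into a staging table
// whose columns mirror the input exactly (so no column mappings are
// needed), then let a stored procedure move the rows into the real table.
using (var conn = new SqlConnection(conStr))
{
    conn.Open();

    // 1. Bulk copy into the staging table. A failure here never touches
    //    the live table.
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "PCRM_Lead_Hdr_Staging";
        bulk.WriteToServer(sourceTable);
    }

    // 2. Run the sproc that validates and moves the data; the sproc can
    //    wrap its work in a transaction.
    using (var cmd = new SqlCommand("usp_MergeLeadHdr", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
}
```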

Answer by Tony Basallo

The answer by Marc would be my recommendation (on using a staging table). This ensures that if your source doesn't change, you'll have fewer issues importing in the future.


However, in my experience, you can check the following issues:


  • Column names match in source and table
  • The column types match


If you think you have done this and still have no success, you can try the following:


  1. Allow nulls in all columns in your table
  2. Comment out all column mappings
  3. Rerun, adding one column at a time, until you find where your issue is


That should bring out the bug

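The one-column-at-a-time step can be automated; a rough sketch, assuming the destination columns have been made nullable (step 1 above) and that `columnNames` holds the source column names (a hypothetical variable, not from the original post):

```csharp
// Diagnostic sketch: try one column mapping per attempt so the first
// failing mapping identifies the mismatched column. The mapping error
// surfaces as an InvalidOperationException from WriteToServer.
foreach (string col in columnNames)
{
    using (var bulk = new SqlBulkCopy(conStr, SqlBulkCopyOptions.KeepIdentity))
    {
        bulk.DestinationTableName = "PCRM_Lead_Hdr";
        bulk.ColumnMappings.Add(col, col);
        try
        {
            bulk.WriteToServer(sourceTable);
            Console.WriteLine("Mapping OK for column '{0}'", col);
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("Mapping failed for column '{0}': {1}", col, ex.Message);
        }
    }
}
```

Remember to clear out the test rows afterwards, since each successful attempt inserts data.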

Answer by Tony Basallo

What I have found is that every column in the input must exist in the table. The table can have more columns than the input and the input will still load; if the table has fewer, you'll receive the error.


Answer by Tareq

I've encountered the same problem while copying data from Access to SQL Server 2005, and I found that the column mappings are case sensitive on both data sources, regardless of the databases' case sensitivity.


Answer by jocheng

I thought a long time about answering... Even if the column names match exactly, including case, you get the same error if the data types differ. So check both the column names and their data types.

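One way to check both names and types up front is to pull the destination schema with a `SELECT TOP 0` query and compare it against the source DataTable; a sketch (table name taken from the question, the rest is illustrative):

```csharp
// Sketch: compare source DataTable columns against the destination schema.
// 'SELECT TOP 0' returns an empty result set whose columns and types
// mirror the destination table.
using (var conn = new SqlConnection(conStr))
using (var cmd = new SqlCommand("SELECT TOP 0 * FROM PCRM_Lead_Hdr", conn))
using (var adapter = new SqlDataAdapter(cmd))
{
    var dest = new DataTable();
    adapter.Fill(dest); // no rows, schema only

    foreach (DataColumn src in sourceTable.Columns)
    {
        if (!dest.Columns.Contains(src.ColumnName))
            Console.WriteLine("Missing in destination: " + src.ColumnName);
        else if (dest.Columns[src.ColumnName].DataType != src.DataType)
            Console.WriteLine("Type mismatch for {0}: {1} vs {2}",
                src.ColumnName, src.DataType,
                dest.Columns[src.ColumnName].DataType);
    }
}
```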

P.S.: staging tables are definitely the way to import.


Answer by Asad

One of the reasons is that SqlBulkCopy is case sensitive. Follow these steps:


  1. First, find your column in the source table using the "Contains" method in C#.
  2. Once your destination column is matched with a source column, get that column's index and use its name in SqlBulkCopy.

For example:


// Get the columns of the source table (schema only)
string sourceTableQuery = "Select top 1 * from sourceTable";
// A SQL helper is used here to execute the query; any data-access code works
DataTable dtSource = SQLHelper.SqlHelper.ExecuteDataset(transaction, CommandType.Text, sourceTableQuery).Tables[0];

for (int i = 0; i < destinationTable.Columns.Count; i++)
{
    // Check whether the destination column exists in the source table
    // (the Contains method is not case sensitive)
    if (dtSource.Columns.Contains(destinationTable.Columns[i].ToString()))
    {
        // Once the column is matched, get its index
        int sourceColumnIndex = dtSource.Columns.IndexOf(destinationTable.Columns[i].ToString());
        // Use the source table's spelling of the column name on both sides
        // of the mapping to avoid case-sensitivity issues
        bulkCopy.ColumnMappings.Add(dtSource.Columns[sourceColumnIndex].ToString(), dtSource.Columns[sourceColumnIndex].ToString());
    }
}
bulkCopy.WriteToServer(destinationTable);
bulkCopy.Close();

Answer by Dickster

I would go with the staging idea; however, here is my approach to handling the case-sensitive nature. Happy to have my LINQ critiqued.


using (SqlConnection connection = new SqlConnection(conn_str))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = string.Format("[{0}].[{1}].[{2}]", targetDatabase, targetSchema, targetTable);
        var targetColumnsAvailable = GetSchema(conn_str, targetTable).ToArray();
        foreach (var column in dt.Columns)
        {
            if (targetColumnsAvailable.Select(x => x.ToUpper()).Contains(column.ToString().ToUpper()))
            {
                var tc = targetColumnsAvailable.Single(x => String.Equals(x, column.ToString(), StringComparison.CurrentCultureIgnoreCase));
                bulkCopy.ColumnMappings.Add(column.ToString(), tc);
            }
        }

        // Write from the source to the destination.
        bulkCopy.WriteToServer(dt);
        bulkCopy.Close();
    }
}

and the helper method

和辅助方法

private static IEnumerable<string> GetSchema(string connectionString, string tableName)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandText = "sp_Columns";
        command.CommandType = CommandType.StoredProcedure;

        command.Parameters.Add("@table_name", SqlDbType.NVarChar, 384).Value = tableName;

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return (string)reader["column_name"];
            }
        }
    }
}