Spring Batch: Writing data to multiple files with dynamic file names
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must follow the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original link: http://stackoverflow.com/questions/15974458/
Spring Batch: Writing data to multiple files with dynamic File Name
Asked by forumuser1
I have a requirement to get the data from a database and write that data to files based on the filename given in the database.
This is how data is defined in the database:
Columns --> FILE_NAME, REC_ID, NAME
Data --> file_1.csv, 1, ABC
Data --> file_1.csv, 2, BCD
Data --> file_1.csv, 3, DEF
Data --> file_2.csv, 4, FGH
Data --> file_2.csv, 5, DEF
Data --> file_3.csv, 6, FGH
Data --> file_3.csv, 7, DEF
Data --> file_4.csv, 8, FGH
As you can see, the file names are defined in the database along with the data, so what Spring Batch should do is get this data and write it to the corresponding file specified in the database (i.e., file_1.csv should contain only records 1, 2, and 3; file_2.csv should contain only records 4 and 5; and so on).
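Stripped of any Spring machinery, the requirement is simply to partition rows by their FILE_NAME column. A minimal plain-Java sketch (the Row record and sample data are illustrative, not from the actual database):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByFileName {
    // Illustrative stand-in for one database row
    record Row(String fileName, int recId, String name) {}

    static Map<String, List<Row>> byFile(List<Row> rows) {
        // Group rows by target file name, preserving the order files first appear
        return rows.stream().collect(
                Collectors.groupingBy(Row::fileName, LinkedHashMap::new, Collectors.toList()));
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
                new Row("file_1.csv", 1, "ABC"),
                new Row("file_1.csv", 2, "BCD"),
                new Row("file_1.csv", 3, "DEF"),
                new Row("file_2.csv", 4, "FGH"),
                new Row("file_2.csv", 5, "DEF"));

        byFile(rows).forEach((file, group) ->
                System.out.println(file + " gets " + group.size() + " record(s)"));
        // prints: file_1.csv gets 3 record(s)
        //         file_2.csv gets 2 record(s)
    }
}
```

Whatever writer is used, this is the invariant to preserve: every row lands in exactly the file its FILE_NAME column names.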
Is it possible to use MultiResourceItemWriter for this requirement? (Please note that the entire file name is dynamic and needs to be retrieved from the database.)
Answered by user1121883
I'm not sure, but I don't think there is an easy way of achieving this with the built-in writers. You could try to build your own ItemWriter like this:
import java.net.MalformedURLException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.LineAggregator;
import org.springframework.core.io.UrlResource;

public class DynamicItemWriter implements ItemStream, ItemWriter<YourEntry> {

    // One FlatFileItemWriter per target file, keyed by file name
    private final Map<String, FlatFileItemWriter<YourEntry>> writers = new HashMap<>();

    private LineAggregator<YourEntry> lineAggregator;
    private ExecutionContext executionContext;

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        this.executionContext = executionContext;
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
    }

    @Override
    public void close() throws ItemStreamException {
        // Close every delegate writer that was opened during the step
        for (FlatFileItemWriter<YourEntry> writer : writers.values()) {
            writer.close();
        }
    }

    @Override
    public void write(List<? extends YourEntry> items) throws Exception {
        for (YourEntry item : items) {
            FlatFileItemWriter<YourEntry> writer = getFlatFileItemWriter(item);
            writer.write(Arrays.asList(item));
        }
    }

    public LineAggregator<YourEntry> getLineAggregator() {
        return lineAggregator;
    }

    public void setLineAggregator(LineAggregator<YourEntry> lineAggregator) {
        this.lineAggregator = lineAggregator;
    }

    // Lazily creates (and caches) one writer per file name found on the item
    public FlatFileItemWriter<YourEntry> getFlatFileItemWriter(YourEntry item) {
        String key = item.getFileName();
        FlatFileItemWriter<YourEntry> writer = writers.get(key);
        if (writer == null) {
            writer = new FlatFileItemWriter<>();
            writer.setLineAggregator(lineAggregator);
            try {
                UrlResource resource = new UrlResource("file:" + key);
                writer.setResource(resource);
                writer.open(executionContext);
            } catch (MalformedURLException e) {
                throw new ItemStreamException("Invalid file name: " + key, e);
            }
            writers.put(key, writer);
        }
        return writer;
    }
}
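The key idea in getFlatFileItemWriter — open a writer the first time a file name is seen, then reuse it — can be isolated with Map.computeIfAbsent. A minimal sketch using in-memory StringWriters in place of FlatFileItemWriter (purely illustrative):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.LinkedHashMap;
import java.util.Map;

public class LazyWriterCache {
    // One buffer per file name, created on first use and reused afterwards
    private final Map<String, StringWriter> buffers = new LinkedHashMap<>();

    PrintWriter writerFor(String fileName) {
        // Stand-in for "create and open a FlatFileItemWriter once per file"
        return new PrintWriter(buffers.computeIfAbsent(fileName, k -> new StringWriter()), true);
    }

    String contentsOf(String fileName) {
        StringWriter sw = buffers.get(fileName);
        return sw == null ? "" : sw.toString();
    }

    public static void main(String[] args) {
        LazyWriterCache cache = new LazyWriterCache();
        cache.writerFor("file_1.csv").println("1,ABC");
        cache.writerFor("file_1.csv").println("2,BCD"); // same buffer, appended
        cache.writerFor("file_2.csv").println("4,FGH");
        System.out.println(cache.buffers.keySet()); // [file_1.csv, file_2.csv]
    }
}
```

The answer's writer does exactly this, except that each cached value is a real FlatFileItemWriter that must also be closed in close().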
and configure it as a writer:
<bean id="csvWriter" class="com....DynamicItemWriter">
    <property name="lineAggregator">
        <bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
            <property name="delimiter" value=","/>
            <property name="fieldExtractor" ref="csvFieldExtractor"/>
        </bean>
    </property>
</bean>
Answered by Rash
In spring-batch, you can do this using ClassifierCompositeItemWriter.
Since ClassifierCompositeItemWriter gives you access to your object during write, you can write custom logic to instruct Spring to write to different files.
Take a look at the sample below. The ClassifierCompositeItemWriter needs an implementation of the Classifier interface. Below you can see that I have created a lambda implementing the classify() method of the Classifier interface. The classify() method is where you create your ItemWriter. In the example below, we have created a FlatFileItemWriter which gets the name of the file from the item itself and then creates a resource for it.
@Bean
public ClassifierCompositeItemWriter<YourDataObject> yourDataObjectItemWriter(
        Classifier<YourDataObject, ItemWriter<? super YourDataObject>> itemWriterClassifier) {
    ClassifierCompositeItemWriter<YourDataObject> compositeItemWriter = new ClassifierCompositeItemWriter<>();
    compositeItemWriter.setClassifier(itemWriterClassifier);
    return compositeItemWriter;
}

@Bean
public Classifier<YourDataObject, ItemWriter<? super YourDataObject>> itemWriterClassifier() {
    return yourDataObject -> {
        String fileName = yourDataObject.getFileName();

        BeanWrapperFieldExtractor<YourDataObject> fieldExtractor = new BeanWrapperFieldExtractor<>();
        fieldExtractor.setNames(new String[]{"recId", "name"});

        DelimitedLineAggregator<YourDataObject> lineAggregator = new DelimitedLineAggregator<>();
        lineAggregator.setFieldExtractor(fieldExtractor);

        FlatFileItemWriter<YourDataObject> itemWriter = new FlatFileItemWriter<>();
        itemWriter.setResource(new FileSystemResource(fileName));
        itemWriter.setAppendAllowed(true);
        itemWriter.setLineAggregator(lineAggregator);
        itemWriter.setHeaderCallback(writer -> writer.write("REC_ID,NAME"));
        itemWriter.open(new ExecutionContext());
        return itemWriter;
    };
}
Finally, you can attach your ClassifierCompositeItemWriter to your batch step just like you normally attach your ItemWriter.
@Bean
public Step myCustomStep(StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory.get("myCustomStep")
            .<?, ?>chunk(1000)
            .reader(myCustomReader())
            .writer(yourDataObjectItemWriter(itemWriterClassifier(null)))
            .build();
}
NOTE: As pointed out in the comments by @Ping, a new writer will be created for each chunk, which is usually bad practice and not an optimal solution. A better solution would be to maintain a hashmap of file names to writers so that each writer can be reused.
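The caching that the note suggests is ordinary memoization of the writer factory. A generic plain-Java sketch of the idea (the string result stands in for an opened writer):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class MemoizedFactory {
    // Wrap an expensive per-key factory so it runs at most once per key
    static <K, V> Function<K, V> memoize(Function<K, V> factory) {
        Map<K, V> cache = new ConcurrentHashMap<>();
        return key -> cache.computeIfAbsent(key, factory);
    }

    public static void main(String[] args) {
        int[] opened = {0};
        Function<String, String> writerFor = memoize(file -> {
            opened[0]++;                 // stands in for opening a FlatFileItemWriter
            return "writer for " + file;
        });

        writerFor.apply("file_1.csv");
        writerFor.apply("file_1.csv");   // cached: the factory does not run again
        writerFor.apply("file_2.csv");
        System.out.println(opened[0]);   // prints 2 (one open per distinct file)
    }
}
```

Applied to the classifier answer, the classify() lambda would look the file name up in such a cache instead of building and opening a fresh FlatFileItemWriter every time; the cached writers would then need to be closed when the step finishes.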