Can distcp be used to copy a directory of files from S3 to HDFS?

I'm wondering whether hadoop distcp can be used to copy multiple files at once from S3 to HDFS. It appears to work only for individual files with absolute paths. I would like either to copy a whole folder, or to use a wildcard.

See also: Hadoop DistCp using wildcards?
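
For reference, the legacy DistCp in Hadoop 1.x also accepts a -f flag that reads its source URIs from a list file, which sidesteps globbing entirely. A minimal sketch, where the list file name, its location, and its contents are all illustrative:

# srclist holds one fully qualified source URI per line,
# e.g. s3n://<key>:<secret>@mybucket/dir/file1
/root/ephemeral-hdfs/bin/hadoop fs -put srclist /tmp/srclist
/root/ephemeral-hdfs/bin/hadoop distcp -f hdfs:///tmp/srclist hdfs:///input/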

I'm aware of s3distcp, but I'd prefer to use distcp for simplicity.

Here was my attempt at copying a directory from S3 to HDFS:

[root@ip-10-147-167-56 ~]# /root/ephemeral-hdfs/bin/hadoop distcp s3n://<key>:<secret>@mybucket/dir hdfs:///input/
13/05/23 19:58:27 INFO tools.DistCp: srcPaths=[s3n://<key>:<secret>@mybucket/dir]
13/05/23 19:58:27 INFO tools.DistCp: destPath=hdfs:/input
13/05/23 19:58:29 INFO tools.DistCp: sourcePathsCount=4
13/05/23 19:58:29 INFO tools.DistCp: filesToCopyCount=3
13/05/23 19:58:29 INFO tools.DistCp: bytesToCopyCount=87.0
13/05/23 19:58:29 INFO mapred.JobClient: Running job: job_201305231521_0005
13/05/23 19:58:30 INFO mapred.JobClient:  map 0% reduce 0%
13/05/23 19:58:45 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_0, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:58:55 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_1, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:04 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_2, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:18 INFO mapred.JobClient: Job complete: job_201305231521_0005
13/05/23 19:59:18 INFO mapred.JobClient: Counters: 6
13/05/23 19:59:18 INFO mapred.JobClient:   Job Counters 
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=38319
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Launched map tasks=4
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/05/23 19:59:18 INFO mapred.JobClient:     Failed map tasks=1
13/05/23 19:59:18 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201305231521_0005_m_000000
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
    at org.apache.hadoop.tools.DistCp.copy(DistCp.java:667)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Which version of Hadoop are you using? Can you tell whether you also get an NPE when using hadoop fs -cp?
Hadoop 1.0.4. When I try fs -cp, it "works" but gets stuck in an infinite loop. With fs -ls I can see that it has been creating infinitely nested dir/dir/dir/dir/dir/... directories. Strange.
Edit: fs -cp works (without any strange behavior) when I use a wildcard.
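
For reference, a sketch of that wildcard form (same bucket and destination as the run above; credentials elided as before, and the quotes keep the local shell from expanding the glob, so hadoop fs does the matching itself):

/root/ephemeral-hdfs/bin/hadoop fs -cp 's3n://<key>:<secret>@mybucket/dir/*' hdfs:///input/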

Author: zzz | 2013-05-23
