[Lustre-discuss] Need help on lustre..

Colin Faber colin_faber at xyratex.com
Sun Feb 17 12:25:21 PST 2013


Hi,

For your Hadoop setup, I think you would be better served by posting to 
the Hadoop discussion list. Once you have it set up and working, please 
feel free to post any Lustre-related issues you run into here.

Good luck!

-cf

On 02/17/2013 04:47 AM, linux freaker wrote:
> Great!!! I tried removing the entry from mapred-site.xml, and it now seems to run well.
>
> Here are the logs now:
>
> [code]
> [root@alpha hadoop]# bin/hadoop jar hadoop-examples-1.1.1.jar wordcount /user/hadoop/hadoop/ /user/hadoop/hadoop/output
> 13/02/17 17:14:37 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 13/02/17 17:14:38 INFO input.FileInputFormat: Total input paths to process : 1
> 13/02/17 17:14:38 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/02/17 17:14:38 INFO mapred.JobClient: Running job: job_local_0001
> 13/02/17 17:14:38 INFO util.ProcessTree: setsid exited with exit code 0
> 13/02/17 17:14:38 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2f74219d
> 13/02/17 17:14:38 INFO mapred.MapTask: io.sort.mb = 100
> 13/02/17 17:14:38 INFO mapred.MapTask: data buffer = 79691776/99614720
> 13/02/17 17:14:38 INFO mapred.MapTask: record buffer = 262144/327680
> 13/02/17 17:14:38 INFO mapred.MapTask: Starting flush of map output
> 13/02/17 17:14:39 INFO mapred.JobClient:  map 0% reduce 0%
> 13/02/17 17:14:39 INFO mapred.MapTask: Finished spill 0
> 13/02/17 17:14:39 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
> 13/02/17 17:14:39 INFO mapred.LocalJobRunner:
> 13/02/17 17:14:39 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
> 13/02/17 17:14:39 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6d79953c
> 13/02/17 17:14:39 INFO mapred.LocalJobRunner:
> 13/02/17 17:14:39 INFO mapred.Merger: Merging 1 sorted segments
> 13/02/17 17:14:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 79496 bytes
> 13/02/17 17:14:39 INFO mapred.LocalJobRunner:
> 13/02/17 17:14:39 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
> 13/02/17 17:14:39 INFO mapred.LocalJobRunner:
> 13/02/17 17:14:39 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
> 13/02/17 17:14:39 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to /user/hadoop/hadoop/output
> 13/02/17 17:14:39 INFO mapred.LocalJobRunner: reduce > reduce
> 13/02/17 17:14:39 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
> 13/02/17 17:14:40 INFO mapred.JobClient:  map 100% reduce 100%
> 13/02/17 17:14:40 INFO mapred.JobClient: Job complete: job_local_0001
> 13/02/17 17:14:40 INFO mapred.JobClient: Counters: 20
> 13/02/17 17:14:40 INFO mapred.JobClient:   File Output Format Counters
> 13/02/17 17:14:40 INFO mapred.JobClient:     Bytes Written=57885
> 13/02/17 17:14:40 INFO mapred.JobClient:   FileSystemCounters
> 13/02/17 17:14:40 INFO mapred.JobClient:     FILE_BYTES_READ=643420
> 13/02/17 17:14:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=574349
> 13/02/17 17:14:40 INFO mapred.JobClient:   File Input Format Counters
> 13/02/17 17:14:40 INFO mapred.JobClient:     Bytes Read=139351
> 13/02/17 17:14:40 INFO mapred.JobClient:   Map-Reduce Framework
> 13/02/17 17:14:40 INFO mapred.JobClient:     Map output materialized bytes=79500
> 13/02/17 17:14:40 INFO mapred.JobClient:     Map input records=2932
> 13/02/17 17:14:40 INFO mapred.JobClient:     Reduce shuffle bytes=0
> 13/02/17 17:14:40 INFO mapred.JobClient:     Spilled Records=11180
> 13/02/17 17:14:40 INFO mapred.JobClient:     Map output bytes=212823
> 13/02/17 17:14:40 INFO mapred.JobClient:     Total committed heap usage (bytes)=500432896
> 13/02/17 17:14:40 INFO mapred.JobClient:     CPU time spent (ms)=0
> 13/02/17 17:14:40 INFO mapred.JobClient:     SPLIT_RAW_BYTES=99
> 13/02/17 17:14:40 INFO mapred.JobClient:     Combine input records=21582
> 13/02/17 17:14:40 INFO mapred.JobClient:     Reduce input records=5590
> 13/02/17 17:14:40 INFO mapred.JobClient:     Reduce input groups=5590
> 13/02/17 17:14:40 INFO mapred.JobClient:     Combine output records=5590
> 13/02/17 17:14:40 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
> 13/02/17 17:14:40 INFO mapred.JobClient:     Reduce output records=5590
> 13/02/17 17:14:40 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
> 13/02/17 17:14:40 INFO mapred.JobClient:     Map output records=21582
> [root@alpha hadoop]#
>
> [/code]
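>
> A quick sanity check (sketch only; this assumes /user/hadoop sits on the
> Lustre client mount, and part-r-00000 is the usual name of the single
> reducer's output file) is to read the wordcount output straight off the
> filesystem:
>
> [code]
> # list the job output directory and peek at the word counts
> ls /user/hadoop/hadoop/output/
> head /user/hadoop/hadoop/output/part-r-00000
> [/code]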
>
> Does this mean Hadoop over Lustre is working fine?
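>
> One thing worth noting: the job id above is job_local_0001, i.e. the job ran
> in Hadoop's LocalJobRunner. In Hadoop 1.x that is what happens whenever
> mapred-site.xml has no mapred.job.tracker entry. For reference, a minimal
> sketch of that entry, which would send jobs back to a real JobTracker (the
> host:port alpha:9001 is only a placeholder, not taken from my actual config):
>
> [code]
> <!-- mapred-site.xml: with this property present, bin/hadoop submits jobs to
>      the configured JobTracker; without it, jobs run in the LocalJobRunner
>      and get ids like job_local_0001. -->
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>alpha:9001</value>
>   </property>
> </configuration>
> [/code]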
>
> On 2/17/13, linux freaker <linuxfreaker at gmail.com> wrote:
>> I tried running the command below but got the error that follows (my guess
>> at the cause is after the trace). I have not put the input into HDFS, since
>> Lustre is what I am trying to use instead.
>>
>> [code]
>> #bin/hadoop jar hadoop-examples-1.1.1.jar wordcount /user/hadoop/hadoop /user/hadoop-output
>>
>> 13/02/17 17:02:50 INFO util.NativeCodeLoader: Loaded the native-hadoop library
>> 13/02/17 17:02:50 INFO input.FileInputFormat: Total input paths to process : 1
>> 13/02/17 17:02:50 WARN snappy.LoadSnappy: Snappy native library not loaded
>> 13/02/17 17:02:50 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-hadoop/mapred/staging/root/.staging/job_201302161113_0004
>> 13/02/17 17:02:50 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.io.FileNotFoundException: File file:/tmp/hadoop-hadoop/mapred/staging/root/.staging/job_201302161113_0004/job.xml does not exist.
>>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3731)
>>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3695)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:616)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:416)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>> Caused by: java.io.FileNotFoundException: File file:/tmp/hadoop-hadoop/mapred/staging/root/.staging/job_201302161113_0004/job.xml does not exist.
>>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
>>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>>         at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:406)
>>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3729)
>>         ... 12 more
>>
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.io.FileNotFoundException: File file:/tmp/hadoop-hadoop/mapred/staging/root/.staging/job_201302161113_0004/job.xml does not exist.
>>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3731)
>>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3695)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:616)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:416)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>> Caused by: java.io.FileNotFoundException: File file:/tmp/hadoop-hadoop/mapred/staging/root/.staging/job_201302161113_0004/job.xml does not exist.
>>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
>>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>>         at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:406)
>>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3729)
>>         ... 12 more
>>
>>         at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>>         at org.apache.hadoop.mapred.$Proxy1.submitJob(Unknown Source)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:616)
>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>>         at org.apache.hadoop.mapred.$Proxy1.submitJob(Unknown Source)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:416)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:616)
>>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:616)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> [root@alpha hadoop]#
>>
>> [/code]
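>>
>> My guess at the cause: the job client wrote its staging files (job.xml etc.)
>> under file:/tmp/... on the submitting node, but the JobTracker looks for the
>> same file:// path on its own node, where it does not exist. If that is right,
>> pointing the shared Hadoop directories at a path every node can see (i.e. the
>> Lustre mount) should help. A sketch only, using Hadoop 1.x property names and
>> /mnt/lustre as a stand-in for my actual client mount point:
>>
>> [code]
>> <!-- core-site.xml: keep scratch/staging space on the shared Lustre mount -->
>> <property>
>>   <name>hadoop.tmp.dir</name>
>>   <value>/mnt/lustre/hadoop_tmp</value>
>> </property>
>>
>> <!-- mapred-site.xml: system and staging dirs visible to both the job client
>>      and the JobTracker -->
>> <property>
>>   <name>mapred.system.dir</name>
>>   <value>/mnt/lustre/hadoop_tmp/mapred/system</value>
>> </property>
>> <property>
>>   <name>mapreduce.jobtracker.staging.root.dir</name>
>>   <value>/mnt/lustre/hadoop_tmp/mapred/staging</value>
>> </property>
>> [/code]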
>>
>> On 2/17/13, linux freaker <linuxfreaker at gmail.com> wrote:
>>> Hello,
>>>
>>> I have 4 machines: 1 MDS, 1 OSS, and 2 Linux clients. I need to run Hadoop
>>> over Lustre, replacing HDFS. I have put the setup details at
>>> http://paste.ubuntu.com/1661235/
>>>
>>> All I need to know is what I really need for Hadoop and what configuration
>>> changes are required. Please suggest.
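>>>
>>> The only idea I have so far is to skip HDFS entirely and let Hadoop read and
>>> write through the Lustre client mount with plain file:// paths, i.e. leave
>>> fs.default.name at (or set it to) its file:/// default instead of an hdfs://
>>> namenode URI, and keep all job input/output directories on the mount. Is
>>> that roughly the right direction? A sketch of what I mean (the /mnt/lustre
>>> path is only a placeholder for the actual mount point on each node):
>>>
>>> [code]
>>> <!-- core-site.xml: use the local (Lustre-backed) filesystem instead of HDFS -->
>>> <property>
>>>   <name>fs.default.name</name>
>>>   <value>file:///</value>
>>> </property>
>>> [/code]
>>>
>>> Job input and output would then just be directories under /mnt/lustre that
>>> every node can see.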
>>>
> _______________________________________________
> Lustre-discuss mailing list
> Lustre-discuss at lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss



