
Hadoop: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

How to solve "Hadoop: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected": one good answer has been selected for you below.

My MapReduce job runs fine when assembled in Eclipse, with all possible Hadoop and Hive jars included in the Eclipse project as dependencies. (These are the jars that ship with a single-node, local Hadoop installation.)

However, when I try to run the same program assembled as a Maven project (see below), I get:

 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

This exception occurs when the program is assembled with the following Maven project:


<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.bigdata.hadoop</groupId>
  <artifactId>FieldCounts</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>FieldCounts</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hive.hcatalog</groupId>
      <artifactId>hcatalog-core</artifactId>
      <version>0.12.0</version>
    </dependency>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>16.0.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>${jdk.version}</source>
          <target>${jdk.version}</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>attached</goal>
            </goals>
            <phase>package</phase>
            <configuration>
              <descriptorRefs>
                <descriptorRef>jar-with-dependencies</descriptorRef>
              </descriptorRefs>
              <archive>
                <manifest>
                  <mainClass>com.bigdata.hadoop.FieldCounts</mainClass>
                </manifest>
              </archive>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

*Please advise where and how to find compatible Hadoop jars?*

[update_1] I am running Hadoop 2.2.0.2.0.6.0-101

As I found here: https://github.com/kevinweil/elephant-bird/issues/247

Hadoop 1.0.3: JobContext is a class

Hadoop 2.0.0: JobContext is an interface

In my pom.xml I have the following jars, three of them at version 2.2.0:

hadoop-hdfs 2.2.0
hadoop-common 2.2.0
hadoop-mapreduce-client-jobclient 2.2.0
hcatalog-core 0.12.0

The only exception is hcatalog-core, which is at version 0.12.0; I could not find a newer version of this jar, and I need it!

How can I find out which of these 4 jars produces java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected?
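
One quick way to narrow this down is to ask the JVM which jar the JobContext type is actually loaded from, and whether it is the Hadoop 2.x interface or the Hadoop 1.x class. Below is a minimal diagnostic sketch (the class name WhichJobContext is just for illustration); run it with the same assembled jar / classpath as the job. If it reports an interface, the error is being raised by bytecode that was compiled against the old class version, and the top of the exception's stack trace shows which library that bytecode lives in.

import org.apache.hadoop.mapreduce.JobContext;

public class WhichJobContext {
    public static void main(String[] args) {
        // Which jar on the classpath provides JobContext?
        System.out.println("Loaded from : "
                + JobContext.class.getProtectionDomain().getCodeSource().getLocation());
        // Hadoop 1.x ships JobContext as a class, Hadoop 2.x as an interface.
        System.out.println("isInterface : " + JobContext.class.isInterface());
    }
}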

Please give me an idea of how to solve this problem. (The only solution I can see is to compile everything from source!)

[/update_1]

Full text of my MapReduce job:

package com.bigdata.hadoop;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.util.*;
import org.apache.hcatalog.mapreduce.*;
import org.apache.hcatalog.data.*;
import org.apache.hcatalog.data.schema.*;
import org.apache.log4j.Logger;

public class FieldCounts extends Configured implements Tool {

    public static class Map extends Mapper<WritableComparable, HCatRecord, TableFieldValueKey, IntWritable> {

        static Logger logger = Logger.getLogger("com.foo.Bar");

        static boolean firstMapRun = true;
        static List<String> fieldNameList = new LinkedList<String>();
        /**
         * Return a list of field names not containing `id` field name
         * @param schema
         * @return
         */
        static List<String> getFieldNames(HCatSchema schema) {
            // Filter out `id` name just once
            if (firstMapRun) {
                firstMapRun = false;
                List<String> fieldNames = schema.getFieldNames();
                for (String fieldName : fieldNames) {
                    if (!fieldName.equals("id")) {
                        fieldNameList.add(fieldName);
                    }
                }
            } // if (firstMapRun)
            return fieldNameList;
        }

        @Override
      protected void map( WritableComparable key,
                          HCatRecord hcatRecord,
                          //org.apache.hadoop.mapreduce.Mapper
                          //.Context context)
                          Context context)
            throws IOException, InterruptedException {

            HCatSchema schema = HCatBaseInputFormat.getTableSchema(context.getConfiguration());

           //String schemaTypeStr = schema.getSchemaAsTypeString();
           //logger.info("******** schemaTypeStr ********** : "+schemaTypeStr);

           //List fieldNames = schema.getFieldNames();
            List<String> fieldNames = getFieldNames(schema);
            for (String fieldName : fieldNames) {
                Object value = hcatRecord.get(fieldName, schema);
                String fieldValue = null;
                if (null == value) {
                    fieldValue = "";
                } else {
                    fieldValue = value.toString();
                }
                //String fieldNameValue = fieldName+"."+fieldValue;
                //context.write(new Text(fieldNameValue), new IntWritable(1));
                TableFieldValueKey fieldKey = new TableFieldValueKey();
                fieldKey.fieldName = fieldName;
                fieldKey.fieldValue = fieldValue;
                context.write(fieldKey, new IntWritable(1));
            }

        }       
    }

    public static class Reduce extends Reducer<TableFieldValueKey, IntWritable, WritableComparable, HCatRecord> {

        protected void reduce( TableFieldValueKey key,
                               java.lang.Iterable<IntWritable> values,
                               Context context)
                               //org.apache.hadoop.mapreduce.Reducer.Context context)
            throws IOException, InterruptedException {
            Iterator<IntWritable> iter = values.iterator();
            int sum = 0;
            // Sum up occurrences of the given key 
            while (iter.hasNext()) {
                IntWritable iw = iter.next();
                sum = sum + iw.get();
            }

            HCatRecord record = new DefaultHCatRecord(3);
            record.set(0, key.fieldName);
            record.set(1, key.fieldValue);
            record.set(2, sum);

            context.write(null, record);
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        args = new GenericOptionsParser(conf, args).getRemainingArgs();

        // To fix Hadoop "META-INFO" (http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file)
        conf.set("fs.hdfs.impl",
                org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        conf.set("fs.file.impl",
                org.apache.hadoop.fs.LocalFileSystem.class.getName());

        // Get the input and output table names as arguments
        String inputTableName = args[0];
        String outputTableName = args[1];
        // Assume the default database
        String dbName = null;

        Job job = new Job(conf, "FieldCounts");

        HCatInputFormat.setInput(job,
                InputJobInfo.create(dbName, inputTableName, null));
        job.setJarByClass(FieldCounts.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        // An HCatalog record as input
        job.setInputFormatClass(HCatInputFormat.class);

        // Mapper emits TableFieldValueKey as key and an integer as value
        job.setMapOutputKeyClass(TableFieldValueKey.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Ignore the key for the reducer output; emitting an HCatalog record as
        // value
        job.setOutputKeyClass(WritableComparable.class);
        job.setOutputValueClass(DefaultHCatRecord.class);
        job.setOutputFormatClass(HCatOutputFormat.class);

        HCatOutputFormat.setOutput(job,
                OutputJobInfo.create(dbName, outputTableName, null));
        HCatSchema s = HCatOutputFormat.getTableSchema(job);
        System.err.println("INFO: output schema explicitly set for writing:"
                + s);
        HCatOutputFormat.setSchema(job, s);
        return (job.waitForCompletion(true) ? 0 : 1);
    }

    public static void main(String[] args) throws Exception {
        String classpath = System.getProperty("java.class.path");
        //System.out.println("*** CLASSPATH: "+classpath);       
        int exitCode = ToolRunner.run(new FieldCounts(), args);
        System.exit(exitCode);
    }
}

Class for the composite key:

package com.bigdata.hadoop;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

import com.google.common.collect.ComparisonChain;

public class TableFieldValueKey implements WritableComparable<TableFieldValueKey> {

      public String fieldName;
      public String fieldValue;

      public TableFieldValueKey() {} //must have a default constructor
      //

      public void readFields(DataInput in) throws IOException {
        fieldName = in.readUTF();
        fieldValue = in.readUTF();
      }

      public void write(DataOutput out) throws IOException {
        out.writeUTF(fieldName);
        out.writeUTF(fieldValue);
      }

      public int compareTo(TableFieldValueKey o) {
        return ComparisonChain.start().compare(fieldName, o.fieldName)
            .compare(fieldValue, o.fieldValue).result();
      }

    }


1> SachinJ..:

Hadoop has gone through a huge code refactoring from Hadoop 1.0 to Hadoop 2.0. One side effect is that code compiled against Hadoop 1.0 is not compatible with Hadoop 2.0, and vice versa. However, the source code is mostly compatible, so you only need to recompile the code against the target Hadoop distribution.

The exception "Found interface X, but class was expected" is very common when you run code compiled for Hadoop 1.0 on Hadoop 2.0, or vice versa.

Find the exact Hadoop version used in your cluster, specify that version in your pom.xml, build the project with the same Hadoop version the cluster uses, and deploy it.
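
Besides running "hadoop version" on a cluster node, a tiny probe like the sketch below (the class name is illustrative) prints the version reported by the Hadoop jars on the classpath, which is the value to declare in pom.xml:

import org.apache.hadoop.util.VersionInfo;

public class PrintHadoopVersion {
    public static void main(String[] args) {
        // Run this with the cluster's Hadoop jars on the classpath,
        // e.g. launched via the cluster's own "hadoop jar" command.
        System.out.println("Hadoop version : " + VersionInfo.getVersion());
        System.out.println("Built from     : " + VersionInfo.getBuildVersion());
    }
}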
