I'm new to Scala. How can I read a file from HDFS using Scala (without using Spark)? When I googled it, I only found write examples for HDFS.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
import java.io.PrintWriter

object App {
  def main(args: Array[String]) {
    println("Trying to write to HDFS...")
    val conf = new Configuration()
    //conf.set("fs.defaultFS", "hdfs://quickstart.cloudera:8020")
    conf.set("fs.defaultFS", "hdfs://192.168.30.147:8020")
    val fs = FileSystem.get(conf)
    val output = fs.create(new Path("/tmp/mySample.txt"))
    val writer = new PrintWriter(output)
    try {
      writer.write("this is a test")
      writer.write("\n")
    } finally {
      writer.close()
      println("Closed!")
    }
    println("Done!")
  }
}
Please help me: how do I read or load a file from HDFS using Scala?
One way to do it (in a functional style) could look like this:
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val hdfs = FileSystem.get(new URI("hdfs://yourUrl:port/"), new Configuration())
val path = new Path("/path/to/file/")
val stream = hdfs.open(path)

// readLine is inherited from java.io.DataInputStream and returns null at end of file
def readLines = Stream.cons(stream.readLine, Stream.continually(stream.readLine))

// This takes lines until the first null (end of file) and prints each existing line in sequence
readLines.takeWhile(_ != null).foreach(line => println(line))
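Note that readLine on FSDataInputStream comes from java.io.DataInputStream and is deprecated. If you'd rather avoid it, here is a minimal sketch of the same read wrapped in a BufferedReader and closed properly; the URI, port, and file path are placeholders you would replace with your own values:

import java.io.{BufferedReader, InputStreamReader}
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsReadExample {
  def main(args: Array[String]): Unit = {
    // Placeholder NameNode URI and file path: substitute your own
    val hdfs = FileSystem.get(new URI("hdfs://yourUrl:port/"), new Configuration())
    val stream = hdfs.open(new Path("/path/to/file/"))
    val reader = new BufferedReader(new InputStreamReader(stream))
    try {
      // Iterator.continually keeps calling readLine; stop at the first null (end of file)
      Iterator.continually(reader.readLine()).takeWhile(_ != null).foreach(println)
    } finally {
      reader.close() // also closes the underlying HDFS stream
    }
  }
}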
You can also have a look at this article, or at here and here; those questions look related to yours and, if you're interested, contain working (though more Java-like) code examples.