
How to find out which Java/Scala thread has locked a file?


In short:

    How can I find out which Java/Scala thread has locked a file? I know that a class/thread in the JVM has locked a concrete file (overlapping a file region), but I don't know how. Is it possible to find out which class/thread is doing this when I stop the application at a breakpoint?

The following code throws an OverlappingFileLockException:

FileChannel.open(Paths.get("thisfile"), StandardOpenOption.APPEND).tryLock().isValid();
FileChannel.open(Paths.get("thisfile"), StandardOpenOption.APPEND).tryLock().isShared();

    How does Java/Scala (Spark) lock this file? I know how to lock a file with java.nio.channels, but I could not find the corresponding call in Spark's GitHub repository. (A minimal sketch of the locking pattern I mean follows below.)
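
For illustration, this is the java.nio.channels pattern I am referring to, written as a minimal self-contained sketch (my own probe code with the placeholder path "thisfile" from the snippet above, not code taken from Spark). Unlike the one-liners above it does not leak the FileChannel:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LockProbe {
    public static void main(String[] args) throws IOException {
        // try-with-resources closes the channel (and releases any lock) on exit;
        // "thisfile" is assumed to exist, as in the snippet above
        try (FileChannel channel = FileChannel.open(
                Paths.get("thisfile"), StandardOpenOption.APPEND)) {
            // tryLock() returns null if another *process* holds the lock and
            // throws OverlappingFileLockException if this JVM already holds one
            FileLock lock = channel.tryLock();
            if (lock == null) {
                System.out.println("Locked by another process");
            } else {
                System.out.println("Got lock, valid=" + lock.isValid()
                        + ", shared=" + lock.isShared());
                lock.release();
            }
        } catch (OverlappingFileLockException e) {
            System.out.println("A thread in this JVM already holds an overlapping lock");
        }
    }
}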



More information about my problem: 1. When I run Spark with Hive on Windows, everything works, but every time Spark shuts down it fails to delete one of the temp directories (the temp directories before it are deleted correctly) and prints the following exception:

2015-12-11 15:04:36 [Thread-13] INFO  org.apache.spark.SparkContext - Successfully stopped SparkContext
2015-12-11 15:04:36 [Thread-13] INFO  o.a.spark.util.ShutdownHookManager - Shutdown hook called
2015-12-11 15:04:36 [Thread-13] INFO  o.a.spark.util.ShutdownHookManager - Deleting directory C:\Users\MyUser\AppData\Local\Temp\spark-9d564520-5370-4834-9946-ac5af3954032
2015-12-11 15:04:36 [Thread-13] INFO  o.a.spark.util.ShutdownHookManager - Deleting directory C:\Users\MyUser\AppData\Local\Temp\spark-42b70530-30d2-41dc-aff5-8d01aba38041
2015-12-11 15:04:36 [Thread-13] ERROR o.a.spark.util.ShutdownHookManager - Exception while deleting Spark temp dir: C:\Users\MyUser\AppData\Local\Temp\spark-42b70530-30d2-41dc-aff5-8d01aba38041
java.io.IOException: Failed to delete: C:\Users\MyUser\AppData\Local\Temp\spark-42b70530-30d2-41dc-aff5-8d01aba38041
    at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:884) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:63) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:60) [spark-core_2.11-1.5.0.jar:1.5.0]
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:78) [scala-library-2.11.6.jar:na]
    at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:60) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:264) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at scala.util.Try$.apply(Try.scala:191) [scala-library-2.11.6.jar:na]
    at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:216) [spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54) [hadoop-common-2.4.1.jar:na]

I tried searching the Internet, but all I found was an in-progress Spark issue (one user tried to make a patch, but, if I read the comments on that pull request correctly, it does not work) and some unanswered questions on SO.

It looks like the problem is in the deleteRecursively() method of Utils.scala. I set a breakpoint in this method and rewrote it in Java:

import java.io.File;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Java port of Spark's Utils.deleteRecursively, used here only for debugging
public class Test {
    public static void deleteRecursively(File file) {
        if (file != null) {
            try {
                if (file.isDirectory()) {
                    for (File child : listFilesSafely(file)) {
                        deleteRecursively(child);
                    }
                    //ShutdownHookManager.removeShutdownDeleteDir(file)
                }
            } finally {
                if (!file.delete()) {
                    if (file.exists()) {
                        throw new RuntimeException("Failed to delete: " + file.getAbsolutePath());
                    }
                }
            }
        }
    }

    private static List<File> listFilesSafely(File file) {
        if (file.exists()) {
            File[] files = file.listFiles();
            if (files == null) {
                throw new RuntimeException("Failed to list files for dir: " + file);
            }
            return Arrays.asList(files);
        } else {
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) {
        deleteRecursively(new File("C:\\Users\\MyUser\\AppData\\Local\\Temp\\spark-9ba0bb0c-1e20-455d-bc1f-86c696661ba3"));
    }
}

When Spark stops at the breakpoint in this method, I can see that a thread of Spark's JVM has locked the file "C:\Users\MyUser\AppData\Local\Temp\spark-9ba0bb0c-1e20-455d-bc1f-86c696661ba3\metastore\db.lck"; Windows Process Explorer also shows that Java has locked this file, and FileChannel likewise reports that the file is locked inside the JVM.

Now, I need to:

    Find out which thread/class has locked this file (see the thread-dump sketch after this list);

    Find out which locking call Spark uses to lock "metastore\db.lck", which class makes it, and how to unlock the file before shutdown;

    Submit a pull request to Spark or Hive that unlocks this file ("metastore\db.lck") before deleteRecursively() is called, or at least leaves a comment about the problem.
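
For point 1, a generic first step I can take at the breakpoint (or just before the shutdown hook runs) is to dump every thread's stack and look for frames that mention the metastore or Derby. This is only my own diagnostic sketch, since the JVM does not record which thread owns a FileLock:

import java.util.Map;

public class ThreadDumpHelper {
    // Print every live thread's name and stack trace so that frames touching
    // the Hive metastore (the likely holder of metastore\db.lck) stand out.
    public static void dumpAllStacks() {
        for (Map.Entry<Thread, StackTraceElement[]> entry
                : Thread.getAllStackTraces().entrySet()) {
            System.out.println("Thread: " + entry.getKey().getName());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }

    public static void main(String[] args) {
        dumpAllStacks();
    }
}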

If you need any additional information, please ask in the comments.



1> Basilevs:

See How to find out which thread has locked a file in Java?

It is the Windows process that locks the file. A thread may open the file for reading or writing, but the class that holds the reference to the file handle is responsible for closing it. So you should be looking for an object, not a thread.

See How to determine the contents of an unpinned object? to find out how.
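
One way to act on this advice (my own sketch added for illustration, not part of the answer): funnel lock acquisition through a single helper that remembers the stack trace of the caller, so the object and code path still holding the handle can be traced back later. TrackedLocks and its methods are hypothetical names, not an existing API:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper: every lock taken through it is recorded together with
// the stack trace of the acquiring code, so the "owning object" can be found.
public class TrackedLocks {
    private static final Map<Path, Exception> OWNERS = new ConcurrentHashMap<>();

    public static FileLock lock(Path path) throws IOException {
        // The channel is deliberately kept open: closing it would release the lock.
        FileChannel channel = FileChannel.open(
                path, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = channel.lock();
        // An Exception is just a convenient carrier for the caller's stack trace.
        OWNERS.put(path, new Exception("locked here"));
        return lock;
    }

    public static void printOwner(Path path) {
        Exception owner = OWNERS.get(path);
        if (owner == null) {
            System.out.println("No lock recorded for " + path);
        } else {
            owner.printStackTrace();
        }
    }
}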
