import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestHDFS {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.0.104:9000");
        FileSystem fs = FileSystem.get(conf);
        // if the directory already exists, mkdirs is a no-op and still returns true
        boolean success = fs.mkdirs(new Path("/xiaol"));
        System.out.println(success);
    }
}
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=xiaol, access=WRITE, inode="/xiaol":root:supergroup:drwxr-xr-x

Hadoop resolved the client's identity to the local OS user xiaol, who has no write permission on the target path: it is owned by root with mode drwxr-xr-x, so group and others may only read and traverse it.
Workarounds commonly suggested online:
1. In the HDFS configuration file, set dfs.permissions.enabled to false
2. hadoop fs -chmod 777 /
I think both of these methods are garbage: the first disables permission checking for the entire cluster, and the second makes the root directory world-writable. Either way you throw HDFS security out the window.
When a client accesses HDFS, Hadoop performs a permission check, and it resolves the username as follows (see the sketch after this list):
1. Read the HADOOP_USER_NAME OS environment variable; if it is non-empty, use it as the username. If it is empty,
2. read the HADOOP_USER_NAME Java system property; if that is also empty,
3. get the username from the com.sun.security.auth.NTUserPrincipal (Windows) or com.sun.security.auth.UnixPrincipal (Unix) instance of the current OS login.
If all of the above fail, a LoginException("Can't find user name") is thrown.
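For intuition, here is a minimal runnable sketch of that lookup order. It is a paraphrase, not Hadoop's actual implementation (the real logic lives in the login module used by UserGroupInformation); the class name and the "user.name" fallback in step 3 are stand-ins for illustration.

import javax.security.auth.login.LoginException;

public class UserNameLookupSketch {
    // Paraphrase of the lookup order described above, NOT Hadoop's real code.
    static String resolveUser() throws LoginException {
        String user = System.getenv("HADOOP_USER_NAME");       // 1. OS environment variable
        if (user == null || user.isEmpty()) {
            user = System.getProperty("HADOOP_USER_NAME");     // 2. JVM system property
        }
        if (user == null || user.isEmpty()) {
            // 3. fall back to the OS login principal (NTUserPrincipal on Windows,
            //    UnixPrincipal on Unix); "user.name" is a rough stand-in for that here
            user = System.getProperty("user.name");
        }
        if (user == null || user.isEmpty()) {
            throw new LoginException("Can't find user name");  // 4. give up
        }
        return user;
    }

    public static void main(String[] args) throws LoginException {
        System.out.println(resolveUser());
    }
}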
Solution:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestHDFS {
    public static void main(String[] args) throws Exception {
        // must be set before the first FileSystem.get() call,
        // because Hadoop resolves and caches the login user only once
        System.setProperty("HADOOP_USER_NAME", "root");
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.0.104:9000");
        FileSystem fs = FileSystem.get(conf);
        // if the directory already exists, mkdirs is a no-op and still returns true
        boolean success = fs.mkdirs(new Path("/xiaol"));
        System.out.println(success);
    }
}
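You can get the same effect without touching the code: export HADOOP_USER_NAME=root in the shell before launching, or pass -DHADOOP_USER_NAME=root as a JVM argument. Whichever way you choose, the value must be in place before the first FileSystem.get() call. Also note that this identity override only works on clusters without Kerberos; once security is enabled, Hadoop ignores HADOOP_USER_NAME and authenticates the real principal.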