• When installing HDP with Ambari, the newly generated ambari-hdp-1.repo has no baseurl value


    References:

    https://community.cloudera.com/t5/Support-Questions/HDP-3-0-with-local-repository-failing-to-deploy/td-p/240954

    https://community.cloudera.com/t5/Community-Articles/ambari-2-7-3-Ambari-writes-Empty-baseurl-values-written-to/ta-p/249314

    The error output is as follows:

    stderr: 
    Traceback (most recent call last):
      File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
        BeforeInstallHook().execute()
      File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
        method(env)
      File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 33, in hook
        install_packages()
      File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
        retry_count=params.agent_stack_retry_count)
      File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
        self.env.run()
      File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
        self.run_action(resource, action)
      File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
        provider_action()
      File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
        self._pkg_manager.install_package(package_name, self.__create_context())
      File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
        shell.repository_manager_executor(cmd, self.properties, context)
      File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
        raise RuntimeError(message)
    RuntimeError: Failed to execute command '/usr/bin/yum -y install hdp-select', exited with code '1', message: 'Repository InstallMedia is listed more than once in the configuration
    
     One of the configured repositories failed (Unknown),
     and yum doesn't have enough cached data to continue. At this point the only
     safe thing yum can do is fail. There are a few ways to work "fix" this:

         1. Contact the upstream for the repository and get them to fix the problem.

         2. Reconfigure the baseurl/etc. for the repository, to point to a working
            upstream. This is most often useful if you are using a newer
            distribution release than is supported by the repository (and the
            packages for the previous distribution release still work).

         3. Run the command with the repository temporarily disabled

                yum --disablerepo=<repoid> ...

         4. Disable the repository permanently, so yum won't use it by default. Yum
            will then just ignore the repository until you permanently enable it
            again or use --enablerepo for temporary usage:

                yum-config-manager --disable <repoid>

            or

                subscription-manager repos --disable=<repoid>

         5. Configure the failing repository to be skipped, if it is unavailable.
            Note that yum will try to contact the repo. when it runs most commands,
            so will have to try and fail each time (and thus. yum will be be much
            slower). If it is a very temporary problem though, this is often a nice
            compromise:

                yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

    Cannot find a valid baseurl for repo: HDP-3.1-repo-1
    '
    Command aborted. Reason: 'Server considered task failed and automatically aborted it'
     stdout:
    2018-12-21 11:47:09,204 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
    2018-12-21 11:47:09,213 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
    2018-12-21 11:47:09,216 - Group['hdfs'] {}
    2018-12-21 11:47:09,218 - Group['hadoop'] {}
    2018-12-21 11:47:09,218 - Group['users'] {}
    2018-12-21 11:47:09,219 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
    2018-12-21 11:47:09,221 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
    2018-12-21 11:47:09,222 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
    2018-12-21 11:47:09,223 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
    2018-12-21 11:47:09,224 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
    2018-12-21 11:47:09,226 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
    2018-12-21 11:47:09,227 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
    2018-12-21 11:47:09,228 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
    2018-12-21 11:47:09,230 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
    2018-12-21 11:47:09,235 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
    2018-12-21 11:47:09,236 - Group['hdfs'] {}
    2018-12-21 11:47:09,236 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
    2018-12-21 11:47:09,237 - FS Type: HDFS
    2018-12-21 11:47:09,237 - Directory['/etc/hadoop'] {'mode': 0755}
    2018-12-21 11:47:09,237 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
    2018-12-21 11:47:09,238 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
    2018-12-21 11:47:09,258 - Repository['HDP-3.1-repo-2'] {'base_url': 'http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]
    name={{repo_id}}
    {% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}
    
    path=/
    enabled=1
    gpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
    2018-12-21 11:47:09,268 - Repository['HDP-UTILS-1.1.0.22-repo-2'] {'base_url': 'http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]
    name={{repo_id}}
    {% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}
    
    path=/
    enabled=1
    gpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
    2018-12-21 11:47:09,272 - Repository[None] {'action': ['create']}
    2018-12-21 11:47:09,273 - File['/tmp/tmpYbCIof'] {'content': '[HDP-3.1-repo-2]
    name=HDP-3.1-repo-2
    baseurl=http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/
    
    path=/
    enabled=1
    gpgcheck=0
    [HDP-UTILS-1.1.0.22-repo-2]
    name=HDP-UTILS-1.1.0.22-repo-2
    baseurl=http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22
    
    path=/
    enabled=1
    gpgcheck=0'}
    2018-12-21 11:47:09,274 - Writing File['/tmp/tmpYbCIof'] because contents don't match
    2018-12-21 11:47:09,274 - Rewriting /etc/yum.repos.d/ambari-hdp-2.repo since it has changed.
    2018-12-21 11:47:09,274 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': StaticFile('/tmp/tmpYbCIof')}
    2018-12-21 11:47:09,276 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because it doesn't exist
    2018-12-21 11:47:09,276 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
    2018-12-21 11:47:09,626 - Skipping installation of existing package unzip
    2018-12-21 11:47:09,627 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
    2018-12-21 11:47:09,844 - Skipping installation of existing package curl
    2018-12-21 11:47:09,844 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
    2018-12-21 11:47:10,062 - Installing package hdp-select ('/usr/bin/yum -y install hdp-select')
    2018-12-21 11:47:10,404 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.
    Command aborted. Reason: 'Server considered task failed and automatically aborted it'
    
    
    Command failed after 1 tries

    The contents of ambari-hdp-1.repo are as follows:

    [HDP-3.1-repo-1]
    name=HDP-3.1-repo-1
    baseurl=
    
    
    path=/
    enabled=1
    gpgcheck=0
    [HDP-UTILS-1.1.0.22-repo-1]
    name=HDP-UTILS-1.1.0.22-repo-1
    baseurl=
    
    
    path=/
    enabled=1
    gpgcheck=0

    Troubleshooting:

    Check the following:

    Verify that the repos are correct on the host where the installation is failing:
    
    # grep 'baseurl' /etc/yum.repos.d/* | grep -i HDP
    Try cleaning the yum cache by running:
    
    # yum clean all
    Check whether multiple "ambari-hdp-<repoid>.repo" files are present in "/etc/yum.repos.d/". If so, move the unwanted files to a backup folder.
    
    Then try installing the "hdp-select" package manually on the failing host:
    
    # yum install hdp-select -y
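
    Putting these checks together, a minimal shell sketch for the failing host (the backup directory name is only an example, not from the original post):

    #!/bin/bash
    # List every HDP-related repo entry and confirm each baseurl actually has a value
    grep 'baseurl' /etc/yum.repos.d/*.repo | grep -i HDP

    # Park duplicate or empty ambari-hdp-*.repo files in a backup directory before retrying
    mkdir -p /root/repo-backup
    ls -l /etc/yum.repos.d/ambari-hdp-*.repo
    # mv /etc/yum.repos.d/ambari-hdp-1.repo /root/repo-backup/   # move only the unwanted files

    # Drop stale repo metadata and retry the package that failed to install
    yum clean all
    yum install -y hdp-select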

    Resolution

    Root cause: https://issues.apache.org/jira/browse/AMBARI-25069
    
    Workaround:
    
    This is a JavaScript bug in Ambari that occurs when a local repository is used and the cluster has no internet access.
    
    To work around it:
    
    Steps
    
    1) Go to /usr/lib/ambari-server/web/javascripts
    
    cd /usr/lib/ambari-server/web/javascripts
    2) Take a backup of app.js
    
     cp app.js app.js_backup
    3) Edit app.js
    
    Find the line (around line 39892): onNetworkIssuesExist: function () {
    
    Change the block from:
    
      /**
       * Use Local Repo if some network issues exist
       */
      onNetworkIssuesExist: function () {
        if (this.get('networkIssuesExist')) {
          this.get('content.stacks').forEach(function (stack) {
              stack.setProperties({
                usePublicRepo: false,
                useLocalRepo: true
              });
              stack.cleanReposBaseUrls();
          });
        }
      }.observes('networkIssuesExist'),
    to
    
      /**
       * Use Local Repo if some network issues exist
       */
      onNetworkIssuesExist: function () {
        if (this.get('networkIssuesExist')) {
          this.get('content.stacks').forEach(function (stack) {
            if(stack.get('useLocalRepo') != true){
              stack.setProperties({
                usePublicRepo: false,
                useLocalRepo: true
              });
              stack.cleanReposBaseUrls();
            } 
          });
        }
      }.observes('networkIssuesExist'),
    as per https://github.com/apache/ambari/pull/2743/files
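
    After saving the edit, you can quickly confirm that the new guard is actually in app.js before moving on (a simple sanity check; the reported line number will vary between Ambari builds):

    # grep -n "useLocalRepo') != true" /usr/lib/ambari-server/web/javascripts/app.js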
    
    Since you have already deployed the cluster, you then need to reset it (Caution: this will erase all the configs you created previously in Step 6, and the hosts and services you selected will have to be selected again).
    
    Command:
    
    ambari-server reset
    Then hard-reload the page and start the Create Cluster wizard again.
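
    As a minimal sketch, the full reset sequence looks like this (ambari-server must be stopped before reset, and reset asks for confirmation before wiping the current cluster definition):

    # ambari-server stop
    # ambari-server reset
    # ambari-server start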
    
     
    
    In case you are already at Step 9 and cannot proceed with ambari-server reset (since that would mean adding lots of configs again), the steps below are for you.
    
    Prerequisites: the cluster is at the deployment step (Step 9) and you only have the Retry button to press.
    
    Steps
    
    1) Stop ambari-server
    
    2) Log in to the database
    
    3) Use the command below to list all the contents of the repo_definition table:
    
     select * from repo_definition; 
    4) You will see that base_url is empty for all the rows in the table.
    
    5) Correct the base_url for every row and update it using the command:
    
    update repo_definition set base_url='<YOUR BASE URL>' where id=<THE CORESPONDING ID>;
    for example:
    
    update repo_definition set base_url='http://asnaik.example.com/localrepo/HDP-3.1' where id=9;
    6) After correcting all the base_url values in the repo_definition table, also delete the empty repo files created by Ambari from /etc/yum.repos.d.
    
    7) Start Ambari, log in to the UI and press the Retry button; the installation should now proceed smoothly.
    
    Hope this helps.
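
    Putting the database fix together, a rough end-to-end sketch, assuming the default embedded PostgreSQL database (named "ambari") and connecting as the postgres OS user; the row ids and mirror URLs below are only examples (the URLs come from the stdout log above), so substitute the values from your own select output and local repository:

    # ambari-server stop
    # sudo -u postgres psql ambari -c "select id, base_url from repo_definition;"
    # sudo -u postgres psql ambari -c "update repo_definition set base_url='http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/' where id=1;"
    # sudo -u postgres psql ambari -c "update repo_definition set base_url='http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22' where id=2;"
    # rm -f /etc/yum.repos.d/ambari-hdp-1.repo   # run on each affected agent host: remove the repo file with the empty baseurl
    # ambari-server start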
• Original post: https://www.cnblogs.com/nshuai/p/13404273.html