• Phoenix connected to HBase: creating a secondary index fails with: Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions: Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeou


    Environment

    •   OS version: CentOS release 6.5 (Final)
    •   Kernel version: 2.6.32-431.el6.x86_64
    •   Phoenix version: phoenix-4.10.0
    •   HBase version: hbase-1.2.6
    •   Rows in table SYNC_BUSINESS_INFO_BYDAY: ~9.9 million (9,926,173 at build time)

    Problem description:

    When creating a secondary index through the Phoenix client connected to HBase, the following error is reported:

    0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);

    Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

    Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041 (state=08000,code=101)

    org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

    Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041

     

             at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)

             at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)

             at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)

             at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)

             at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)

             at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)

             at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)

             at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)

             at org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:754)

             at org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)

             at org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)

             at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3332)

             at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1302)

             at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1584)

             at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)

             at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:358)

             at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)

             at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

             at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:339)

             at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)

             at sqlline.Commands.execute(Commands.java:822)

             at sqlline.Commands.sql(Commands.java:732)

             at sqlline.SqlLine.dispatch(SqlLine.java:813)

             at sqlline.SqlLine.begin(SqlLine.java:686)

             at sqlline.SqlLine.start(SqlLine.java:398)

             at sqlline.SqlLine.main(SqlLine.java:291)

    Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

    Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041

     

             at java.util.concurrent.FutureTask.report(FutureTask.java:122)

             at java.util.concurrent.FutureTask.get(FutureTask.java:206)

             at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)

             ... 24 more

    Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

    Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041

     

             at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)

             at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:146)

             at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)

             at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)

             at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)

             at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)

             at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)

             at java.util.concurrent.FutureTask.run(FutureTask.java:266)

             at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)

             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

             at java.lang.Thread.run(Thread.java:745)

    Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:

    Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041

     

             at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)

             at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:65)

             at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)

             ... 10 more

    Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

    Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041

     

             at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)

             at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)

             at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)

             at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

             at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)

             at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:409)

             at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)

             at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)

             ... 11 more

    Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041

             at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)

             at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)

             ... 3 more

    Caused by: java.io.IOException: Call to host-10-191-5-227/10.191.5.227:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=153, waitTime=60001, operationTimeout=60000 expired.

             at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)

             at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1271)

             at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)

             at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)

             at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)

             at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)

             at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)

             at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

             at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)

             at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)

             at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)

             ... 4 more

    Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=153, waitTime=60001, operationTimeout=60000 expired.

             at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)

             at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1245)

             ... 13 more

    Problem analysis

    The hbase.ipc.CallTimeoutException in the trace shows that, during the Phoenix client operation, an RPC ran longer than the configured timeout (operationTimeout=60000 ms, i.e. 60 s) and was aborted; after 36 retry attempts the client gave up.
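The failure mode can be read straight off the figures in the error message; a minimal illustration (plain arithmetic, not HBase code):

```python
# Values copied from the error message above.
call_timeout_ms = 60000    # per-call budget (the 60 s default)
call_duration_ms = 60101   # observed duration of the failing scan RPC
attempts = 36              # retry attempts exhausted by the client

# Each attempt exceeded its 60 s budget, so all 36 attempts failed.
print(call_duration_ms > call_timeout_ms)  # True
```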

    Solution:

    These are HBase settings consumed by the Phoenix client, so they must be added or changed in the hbase-site.xml that the Phoenix client reads (here it lives in the Phoenix bin directory, /mnt/aiprd/app/phoenix-4.10.0/bin).

    Many solutions found online did not resolve the problem; a Bing search finally turned up an article with the reference configuration below.

    Fix

    1. Edit the Phoenix client's hbase-site.xml as follows:

    <configuration>
        <property>
            <name>hbase.regionserver.wal.codec</name>
            <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
        </property>
        <property>
            <name>phoenix.query.timeoutMs</name>
            <value>1800000</value>
        </property>    
        <property>
            <name>hbase.regionserver.lease.period</name>
            <value>1200000</value>
        </property>
        <property>
            <name>hbase.rpc.timeout</name>
            <value>1200000</value>
        </property>
        <property>
            <name>hbase.client.scanner.caching</name>
            <value>1000</value>
        </property>
        <property>
            <name>hbase.client.scanner.timeout.period</name>
            <value>1200000</value>
        </property>
    </configuration>
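As a sanity check, the timeout values above can be parsed and compared against the 60 s default that expired. The helper below is a hypothetical illustration (not part of Phoenix), with the relevant properties embedded as a string so it is self-contained; in practice you would read the real hbase-site.xml from the directory HBASE_CONF_PATH points to.

```python
import xml.etree.ElementTree as ET

# The timeout-related properties from the <configuration> above,
# embedded here so the check runs standalone.
HBASE_SITE = """<configuration>
    <property><name>phoenix.query.timeoutMs</name><value>1800000</value></property>
    <property><name>hbase.rpc.timeout</name><value>1200000</value></property>
    <property><name>hbase.client.scanner.timeout.period</name><value>1200000</value></property>
</configuration>"""

def read_props(xml_text):
    """Return {property-name: value} parsed from hbase-site.xml content."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

props = read_props(HBASE_SITE)
# Every timeout should comfortably exceed the 60 000 ms that expired above.
assert all(int(v) > 60000 for v in props.values())
print(props["hbase.rpc.timeout"])  # 1200000
```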

    2. Set the HBASE_CONF_PATH environment variable

    [aiprd@host-10-191-5-227 phoenix-4.10.0]$ export HBASE_CONF_PATH=/mnt/aiprd/app/phoenix-4.10.0/bin

    [aiprd@host-10-191-5-227 phoenix-4.10.0]$ echo $HBASE_CONF_PATH

    /mnt/aiprd/app/phoenix-4.10.0/bin

    Note: this environment variable points to the directory containing hbase-site.xml. If it is not set, the client may fall back to the default configuration; the settings in hbase-site.xml are meant to override those defaults.
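A small pre-flight check can catch the missing variable before launching sqlline.py. This is a hypothetical helper, assuming hbase-site.xml sits directly in the directory HBASE_CONF_PATH points to:

```python
import os

def conf_ok(env):
    """True if HBASE_CONF_PATH is set and its directory holds hbase-site.xml."""
    conf_dir = env.get("HBASE_CONF_PATH", "")
    return bool(conf_dir) and os.path.isfile(os.path.join(conf_dir, "hbase-site.xml"))

# Unset variable -> client silently falls back to default timeouts.
print(conf_ok({}))                 # False
print(conf_ok(dict(os.environ)))   # True only if the variable is exported correctly
```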

    If HBASE_CONF_PATH is not set, errors such as the following may occur:

    0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);

    18/03/06 14:18:55 WARN client.ScannerCallable: Ignore, probably already closed

    org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '2842'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade.

    If the issue is due to reason (b), a possible fix would be increasing the value of'hbase.client.scanner.timeout.period' configuration.

             at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)

             at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)

             at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)

             at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

             at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

             at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

             at java.lang.Thread.run(Thread.java:745)

     

             at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

             at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

             at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

             at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

             at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

             at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)

             at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:329)

             at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:379)

             at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)

             at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:145)

             at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)

             at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

             at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)

             at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:264)

             at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:247)

             at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:540)

             at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)

             at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)

             at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)

             at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)

             at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)

             at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)

             at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)

             at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)

             at java.util.concurrent.FutureTask.run(FutureTask.java:266)

             at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)

             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

             at java.lang.Thread.run(Thread.java:745)

    Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '2842'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade.

    If the issue is due to reason (b), a possible fix would be increasing the value of'hbase.client.scanner.timeout.period' configuration.

             at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)

             at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)

             at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)

             at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

             at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

             at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

             at java.lang.Thread.run(Thread.java:745)

     

             at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267)

             at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)

             at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)

             at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)

             at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:377)

             ... 21 more

    Traceback (most recent call last):

      File "bin/sqlline.py", line 112, in <module>

        (output, error) = childProc.communicate()

      File "/usr/lib64/python2.6/subprocess.py", line 728, in communicate

    Exception in thread "SIGINT handler" java.lang.RuntimeException: java.sql.SQLFeatureNotSupportedException

             at sqlline.SunSignalHandler.handle(SunSignalHandler.java:43)

             at sun.misc.Signal$1.run(Signal.java:212)

             at java.lang.Thread.run(Thread.java:745)

    Caused by: java.sql.SQLFeatureNotSupportedException

             at org.apache.phoenix.jdbc.PhoenixStatement.cancel(PhoenixStatement.java:1398)

             at sqlline.DispatchCallback.forceKillSqlQuery(DispatchCallback.java:83)

             at sqlline.SunSignalHandler.handle(SunSignalHandler.java:38)

             ... 2 more

        self.wait()

      File "/usr/lib64/python2.6/subprocess.py", line 1302, in wait

        pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)

      File "/usr/lib64/python2.6/subprocess.py", line 462, in _eintr_retry_call

        return func(*args)

    KeyboardInterrupt

    3. After completing the settings above, reconnect to HBase via sqlline.py

    [aiprd@host-10-191-5-227 phoenix-4.10.0]$ bin/sqlline.py host-10-191-5-226

    Setting property: [incremental, false]

    Setting property: [isolation, TRANSACTION_READ_COMMITTED]

    issuing: !connect jdbc:phoenix:host-10-191-5-226 none none org.apache.phoenix.jdbc.PhoenixDriver

    Connecting to jdbc:phoenix:host-10-191-5-226

    18/03/06 14:21:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    Connected to: Phoenix (version 4.10)

    Driver: PhoenixEmbeddedDriver (version 4.10)

    Autocommit status: true

    Transaction isolation: TRANSACTION_READ_COMMITTED

    Building list of tables and columns for tab-completion (set fastconnect to true to skip)...

    711/711 (100%) Done

    Done

    sqlline version 1.2.0

    0: jdbc:phoenix:host-10-191-5-226> # this prompt indicates a successful connection to the HBase database.

    4. Re-create the secondary index on the large table

    0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);

    9,926,173 rows affected (397.567 seconds)

    Note: the secondary index build completed in 397.567 seconds; the table holds 9,926,173 rows.
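Quick arithmetic on the reported figures gives the average build throughput (illustrative only):

```python
# Throughput of the index build reported above.
rows = 9_926_173
seconds = 397.567
print(int(rows / seconds))  # 24967 rows/s on average
```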

    5. Check that the secondary index status is ACTIVE

    0: jdbc:phoenix:host-10-191-5-226> !tables

     

    Note: the index was created successfully; status ACTIVE means it is available.

    Summary:

    The problem came down to Phoenix client configuration. Pay particular attention to the HBASE_CONF_PATH environment variable: it must actually take effect, otherwise the settings in hbase-site.xml are ignored and creating an index on a large table will still fail.

     

    Document created: 2018-03-06 15:11:33

    Reference: https://community.hortonworks.com/content/supportkb/49037/phoenix-sqlline-query-on-larger-data-set-fails-wit.html

  • Original article: https://www.cnblogs.com/chuanzhang053/p/8514477.html