WDL workflows can be run in three modes: locally, on a cluster, or in the cloud. Local execution needs no backend configuration file; the other two modes require one.
Local execution
Download cromwell and womtool to your local server from https://github.com/broadinstitute/cromwell/releases.
I do not recommend the newest release: when I tried the latest version 78 it failed, apparently because of a Java version mismatch:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/hsqldb/jdbcDriver has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
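Class file version 52 corresponds to Java 8 and 55 to Java 11, so the recent Cromwell releases appear to require Java 11 or newer. A quick way to check (the JDK path below is hypothetical):
java -version                      # a 1.8.x JRE only understands class file version 52
/path/to/jdk-11/bin/java -version  # a Java 11+ install is needed for the newer Cromwell jars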
Version 51 is used in the examples below.
Example 1
Write echo.wdl:
workflow wf_echo {
    call echo
    output {
        echo.outFile
        echo.content
    }
}

task echo {
    String out
    command {
        echo Hello World! > ${out}
    }
    output {
        File outFile = "${out}"
        Array[String] content = read_lines(outFile)
    }
}
Validate the WDL with womtool:
java -jar womtool-51.jar validate echo.wdl
It should print Success!
Generate the inputs JSON:
java -jar womtool-51.jar inputs echo.wdl >echo.json
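The generated template lists each required workflow input with its type as a placeholder value, roughly:
{
  "wf_echo.echo.out": "String"
}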
Edit echo.json to set the input value:
{
  "wf_echo.echo.out": "hello_world"
}
Run the WDL with cromwell:
java -jar cromwell-51.jar run echo.wdl --inputs echo.json
Check whether the final workflow status is 'Succeeded' or 'Failed'.
When the workflow finishes, two directories are created by default under the working directory: cromwell-executions (per-task execution files) and cromwell-workflow-logs (logs). The cromwell-executions directory looks like this:
wf_echo/
└── d62e94fe-372d-434c-abcb-144036f26935
    └── call-echo
        ├── execution
        │   ├── hello_world
        │   ├── rc
        │   ├── script
        │   ├── script.background
        │   ├── script.submit
        │   ├── stderr
        │   ├── stderr.background
        │   ├── stdout
        │   └── stdout.background
        └── tmp.d25a3769
Each run creates a new UUID-named directory (so earlier results are never overwritten), and every task gets a similar directory layout. In my opinion execution is quite slow (a lot of machinery is invoked for each task) and far too many intermediate files are produced!
Expected result:
$ cat hello_world
Hello World!
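Since each run lives under a fresh UUID directory, the result can be pulled out with a glob; the rc file holds the task's exit code (0 means success):
cat cromwell-executions/wf_echo/*/call-echo/execution/rc          # task exit code, 0 on success
cat cromwell-executions/wf_echo/*/call-echo/execution/hello_world # the actual output file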
Example 2
A slightly more complex example with a scatter (parallel execution) and multiple outputs; let's look at the resulting directory tree.
test.wdl:
workflow testwdl {
    Int? thread = 6
    String varwdl
    String prefix
    Array[Int] intarray = [1,2,3,4,5]
    if(thread>5) {
        call taska {
            input:
                vara = varwdl,
                infile = taskb.outfile,
                prefix = prefix
        }
    }
    scatter (sample in intarray) {
        call taskb {
            input:
                varb = 'testb',
                thread = thread,
                prefix = sample
        }
    }
}

task taska {
    String vara
    Array[File] infile
    String prefix
    command {
        cat ${sep=" " infile} >${prefix}_${vara}.txt
    }
}

task taskb {
    String varb
    Int thread
    String prefix
    command {
        echo ${varb} ${thread} >${prefix}.txt
    }
    output {
        File outfile = '${prefix}.txt'
    }
}
test.json:
{
  "testwdl.varwdl": "hellowdl",
  "testwdl.prefix": "testwdl"
}
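As in Example 1, the workflow can be validated first and an inputs template generated, then trimmed to the two values above (thread has a default and may be omitted):
java -jar womtool-51.jar validate test.wdl
java -jar womtool-51.jar inputs test.wdl > test.json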
Run: java -jar cromwell-51.jar run test.wdl --inputs test.json
The resulting directory tree:
23ab84c5-f219-4f2d-852f-677df6811a0b
├── call-taska
│   ├── execution
│   │   ├── rc
│   │   ├── script
│   │   ├── script.background
│   │   ├── script.submit
│   │   ├── stderr
│   │   ├── stderr.background
│   │   ├── stdout
│   │   ├── stdout.background
│   │   └── testwdl_hellowdl.txt
│   ├── inputs
│   │   ├── -1507720077
│   │   │   └── 3.txt
│   │   ├── 2086182641
│   │   │   └── 1.txt
│   │   ├── 289231282
│   │   │   └── 2.txt
│   │   ├── -806655499
│   │   │   └── 5.txt
│   │   └── 990295860
│   │       └── 4.txt
│   └── tmp.de320778
└── call-taskb
    ├── shard-0
    │   ├── execution
    │   │   ├── 1.txt
    │   │   ├── rc
    │   │   ├── script
    │   │   ├── script.background
    │   │   ├── script.submit
    │   │   ├── stderr
    │   │   ├── stderr.background
    │   │   ├── stdout
    │   │   └── stdout.background
    │   └── tmp.eba86162
    ├── shard-1
    │   ├── execution
    │   │   ├── 2.txt
    │   │   ├── rc
    │   │   ├── script
    │   │   ├── script.background
    │   │   ├── script.submit
    │   │   ├── stderr
    │   │   ├── stderr.background
    │   │   ├── stdout
    │   │   └── stdout.background
    │   └── tmp.658f2d2f
    ├── shard-2
    │   ├── execution
    │   │   ├── 3.txt
    │   │   ├── rc
    │   │   ├── script
    │   │   ├── script.background
    │   │   ├── script.submit
    │   │   ├── stderr
    │   │   ├── stderr.background
    │   │   ├── stdout
    │   │   └── stdout.background
    │   └── tmp.ae04eda0
    ├── shard-3
    │   ├── execution
    │   │   ├── 4.txt
    │   │   ├── rc
    │   │   ├── script
    │   │   ├── script.background
    │   │   ├── script.submit
    │   │   ├── stderr
    │   │   ├── stderr.background
    │   │   ├── stdout
    │   │   └── stdout.background
    │   └── tmp.bcfe9d45
    └── shard-4
        ├── execution
        │   ├── 5.txt
        │   ├── rc
        │   ├── script
        │   ├── script.background
        │   ├── script.submit
        │   ├── stderr
        │   ├── stderr.background
        │   ├── stdout
        │   └── stdout.background
        └── tmp.2e004f34
Cluster execution
Cromwell supports not only local job scheduling but also cluster and cloud job-management systems; with a simple backend configuration it can drive large-scale computation.
Example configuration files for the various cluster/cloud job managers are provided by the developers (https://github.com/broadinstitute/cromwell/tree/develop/cromwell.example.backends); in essence they all just embed the scheduler's submit command into the backend definition.
SGE configuration (backend.conf):
include required(classpath("application"))

backend {
  default = SGE
  # sge config
  providers {
    SGE {
      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
      config {
        # Limits the number of concurrent jobs
        concurrent-job-limit = 50
        # Warning: If set, Cromwell will run 'check-alive' for every job at this interval
        # exit-code-timeout-seconds = 120
        runtime-attributes = """
          Int cpu = 8
          Float? memory_gb
          String? sge_queue
          String? sge_project
        """
        submit = """
          qsub \
          -terse \
          -N ${job_name} \
          -wd ${cwd} \
          -o ${out}.out \
          -e ${err}.err \
          ${"-pe smp " + cpu} \
          ${"-l mem_free=" + memory_gb + "g"} \
          ${"-q " + sge_queue} \
          ${"-P " + sge_project} \
          ${script}
        """
        kill = "qdel ${job_id}"
        check-alive = "qstat -j ${job_id}"
        job-id-regex = "(\\d+)"
        # filesystem config
        filesystems {
          local {
            localization: [
              "hard-link", "soft-link", "copy"
            ]
            caching {
              duplication-strategy: [
                "hard-link", "soft-link", "copy"
              ]
              # Default: "md5"
              hashing-strategy: "md5"
              # Default: 10485760 (10MB).
              fingerprint-size: 10485760
              # Default: false
              check-sibling-md5: false
            }
          }
        }
      }
    }
  }
}
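The attributes declared in runtime-attributes above can then be requested per task in a WDL runtime block; a minimal sketch (task name, queue name and values are only illustrative):
task heavy_step {
    command {
        echo running on an SGE node
    }
    runtime {
        cpu: 16
        memory_gb: 32
        sge_queue: "all.q"
    }
}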
Submit with:
java -Dconfig.file=backend.conf -jar cromwell-51.jar run test.wdl --inputs test.json
If Docker is used, it also needs to be configured; an example:
dockerRoot=/cromwell-executions
backend {
  default = Docker
  providers {
    # Example backend that _only_ runs workflows that specify docker for every command.
    Docker {
      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
      config {
        run-in-background = true
        runtime-attributes = "String docker"
        # The docker run command is embedded here.
        # docker_cwd is set via dockerRoot (default /cromwell-executions) and maps to ./cromwell-executions under the current directory (${cwd}).
        submit-docker = "docker run --rm -v ${cwd}:${docker_cwd} -i ${docker} /bin/bash < ${docker_script}"
      }
    }
  }
}
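With runtime-attributes = "String docker", this backend only runs tasks that name an image in their runtime block, for example (the image is arbitrary):
task echo_in_container {
    command {
        echo Hello from Docker
    }
    runtime {
        docker: "ubuntu:20.04"
    }
}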
As for the cloud, the providers have essentially configured everything already; we only need to use their interfaces, and if that fails, ask their technical support.
Ref:
https://www.jianshu.com/p/b396f9fc15e9
https://www.jianshu.com/p/91a4d799bde5
https://zhuanlan.zhihu.com/p/417633670
https://github.com/broadinstitute/cromwell/blob/develop/cromwell.example.backends/cromwell.examples.conf