http://www.cnblogs.com/lexus/archive/2011/01/10/1931923.html
http://www.cnblogs.com/lexus/archive/2010/12/20/1911940.html?login=1
1) Commit changes to the local git repository
git add .
git commit -a
git status
2) Push changes to the remote
git push linode master
3) Since development and production are now split, some work was done locally: the production database was copied from the development database, with the data in a few tables deleted,
and named xxx_production.
4) Modified the original deploy.rb to suit the new setup that uses production as the database environment (a sketch of these deploy.rb changes follows after the cap deploy step below).
Changed the production database name in the remote /shared/config/database.yml to match.
On the remote server, first create that database using mysql -uroot -pxxxx:
show databases;
create database xxxx_production;
Then, using the corrected deploy file, run cap sync:up to dump the local database -> compress -> upload -> decompress -> restore into xxxx_production.
Then run cap deploy to push the updated code.
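A minimal sketch of the relevant deploy.rb pieces, assuming Capistrano 2 and MySQL; the sync:up task name comes from the notes above, but the database names, credentials and task internals here are placeholders and assumptions, not the actual deploy.rb:

set :rails_env, "production"
set :db_name,   "xxxx_production"   # must match production: in shared/config/database.yml

namespace :sync do
  desc "Dump the local database, compress, upload, decompress and restore remotely"
  task :up, :roles => :db do
    dump = "db_sync.sql.gz"
    # dump and compress the local database
    system "mysqldump -uroot -pxxxx xxxx_development | gzip > #{dump}"
    # upload the compressed dump into the shared directory on the server
    upload dump, "#{shared_path}/#{dump}", :via => :scp
    # decompress and restore it into the production database
    run "gunzip -c #{shared_path}/#{dump} | mysql -uroot -pxxxx #{db_name}"
  end
end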
Much of the deploy.rb code was copy-pasted piecemeal from different places, so its structure is a mess and I wanted to rewrite it. I gave up shortly after starting, mainly because I got stuck on the shell parts (I'm still not comfortable with shell). It was getting late, so I'll come back to it next time.
The changes in this update were fairly large, going from Rails 3.0.1 -> 3.0.3;
many details were changed.
Updated nginx.conf to allow uploading large files.
Used cap nginx:conf_down and nginx:conf_upload to download and upload the configuration file, but I still can't restart nginx with sudo ./sbin/nginx -s reload, because I haven't sorted out the sudoers setup for Capistrano.
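For reference, a hedged sketch of what those nginx tasks and the missing sudoers piece might look like; the install path and task bodies are assumptions, only the conf_down/conf_upload names and the -s reload command come from the notes (the large-upload change in nginx.conf itself is typically the client_max_body_size directive):

set :nginx_root, "/usr/local/nginx"   # assumed install prefix

namespace :nginx do
  desc "Download nginx.conf from the server for local editing"
  task :conf_down, :roles => :web do
    download "#{nginx_root}/conf/nginx.conf", "config/nginx.conf", :via => :scp
  end

  desc "Upload the edited nginx.conf back to the server"
  task :conf_upload, :roles => :web do
    upload "config/nginx.conf", "#{nginx_root}/conf/nginx.conf", :via => :scp
  end

  desc "Reload nginx; needs a passwordless sudoers entry for the deploy user"
  task :reload, :roles => :web do
    # e.g. in /etc/sudoers:  deploy ALL=(root) NOPASSWD: /usr/local/nginx/sbin/nginx
    sudo "#{nginx_root}/sbin/nginx -s reload"
  end
end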
Go into xxxx/current and run
ar_sendmail_rails3 -d -e production --delay 4
to start the mail-sending daemon in the background.
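If that manual step should run on every deploy, a small Capistrano hook could cover it; this is a sketch of an assumption, not part of the original deploy.rb:

namespace :mailer do
  desc "Start the ar_sendmail daemon in the background on the app server"
  task :start, :roles => :app do
    run "cd #{current_path} && ar_sendmail_rails3 -d -e production --delay 4"
  end
end
# optionally hook it into the deploy flow:
# after "deploy:restart", "mailer:start"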
Changed the rake edm:everyday_tuan code to send test emails only to myself; it will be changed to send to external recipients after the official launch.
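A purely hypothetical sketch of that temporary guard; only the edm:everyday_tuan task name comes from the notes, while the mailer class, method and address are invented for illustration:

namespace :edm do
  desc "Send the daily tuan EDM (test mode: only to myself until launch)"
  task :everyday_tuan => :environment do
    recipients = ["me@example.com"]   # hypothetical test address
    recipients.each do |addr|
      # hypothetical Rails 3 mailer call; swap in the real recipient list after launch
      EdmMailer.everyday_tuan(addr).deliver
    end
  end
end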
http://serverfault.com/questions/200122/capistrano-requiring-root-password-to-deploy-bad
Remote Cache
In most cases you want to use this option; otherwise each deploy will do a full repository clone.
set :deploy_via, :remote_cache
Remote caching keeps a local git repo on the server you're deploying to and simply runs a fetch from that rather than doing an entire clone. This is probably the best option, as it will only fetch the changes since the last deploy.
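For context, this option usually sits next to the SCM settings in deploy.rb (the repository URL below is a placeholder):

set :scm,        :git
set :repository, "git@example.com:myapp.git"   # placeholder URL
set :deploy_via, :remote_cache                 # cached clone lives in shared/cached-copy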
Shallow Clone
As an alternative to the remote cache approach, you can use shallow cloning.
set :git_shallow_clone, 1
Shallow cloning will do a clone each time, but will only get the top commit, not the entire repo. This makes it a bit closer to how an svn checkout works. Be warned: shallow clone won't work well with the set :branch option.
The above is from a Capistrano configuration tutorial.