Seafdav patch

parent f330477785 → commit 9d27483fe7
21 changed files with 1730 additions and 122 deletions
.gitignore (vendored, 6 changes)
@@ -1,3 +1,9 @@
*~
*#
*.log
*.gz
/seafdav.conf
/seafdav.fcgi.conf
.DS_Store
.cache
.coverage
.travis.yml (69 changes)
@@ -1,46 +1,31 @@
dist: bionic
language: python
dist: jammy

matrix:
  include:
    - python: "3.11"  # EOL 2027-10-24
      env: TOXENV=check,py311
    - python: "3.10"  # EOL 2026-10-04
      env: TOXENV=py310
    - python: "3.9"   # EOL 2025-10-05
      env: TOXENV=py39
    - python: "3.8"   # EOL 2024-10-14
      env: TOXENV=py38
    - python: "3.7"   # EOL 2023-06-27
      env: TOXENV=py37
    # - python: "3.6"  # EOL 2021-12-21
    #   env: TOXENV=py36
    # - python: "3.5"  # EOL 2020-09-13
    #   env: TOXENV=py35
    # - python: "3.4"  # EOL 2019-03-18
    #   env: TOXENV=py34
    - python: "3.12-dev"
      env: TOXENV=py312
  allow_failures:
    - python: "3.12-dev"
      env: TOXENV=py312

python:
  - "3.6"
compiler:
  - gcc
addons:
  apt:
    packages:
      - valac
      - uuid-dev
      - libevent-dev
      - libarchive-dev
      - intltool
      - libjansson-dev
      - libonig-dev
      - libfuse-dev
      - net-tools
      - libglib2.0-dev
      - sqlite3
      - libsqlite3-dev
      - libonig-dev
      - libcurl4-openssl-dev
before_install:
  # See issue #80: litmus fails to build on travis
  # The branch 'travis-litmus' still has this enabled to investigate...
  # - sudo apt-get install libneon27-dev
  # - ./install_litmus.sh

services:
  - redis-server

  - chmod +x ci/install-deps.sh
  - chmod +x ci/functests.sh
  - pip install -r ./ci/requirements.txt
install:
  - travis_retry pip install -U pip setuptools  # for Py37
  - travis_retry pip install -U tox coveralls coverage

  - "./ci/install-deps.sh"
script:
  - travis_retry tox

after_success:
  - coverage combine
  - coveralls
  - "./ci/functests.sh init && ./ci/functests.sh runserver && ./ci/functests.sh test"
LICENSE (3 changes)
@@ -1,6 +1,9 @@
The MIT License

Copyright (c) 2009-2023 Martin Wendt, (Original PyFileServer (c) 2005 Ho Chun Wei)
Copyright (c) 2012-present Seafile Ltd.

Seafile webdav server is based on WsgiDAV.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
Makefile (new file, 6 lines)
@@ -0,0 +1,6 @@
all: seafdav.tar.gz

seafdav.tar.gz:
	git archive HEAD wsgidav | gzip > seafdav.tar.gz
clean:
	rm -f *.gz
README.md (98 changes)
@@ -1,38 +1,19 @@
#  WsgiDAV
[](https://app.travis-ci.com/github/mar10/wsgidav)
[](https://pypi.python.org/pypi/WsgiDAV/)
[](https://github.com/mar10/wsgidav/blob/master/LICENSE)
[](http://wsgidav.readthedocs.io/)
[](https://github.com/ambv/black)
[](https://github.com/mar10/yabs)
[](https://stackoverflow.com/questions/tagged/WsgiDAV)
# Seafile WebDAV Server [](http://travis-ci.org/haiwen/seafdav)

This is the WebDAV server for Seafile.

[ Open in Visual Studio Code (experimental)](https://open.vscode.dev/mar10/wsgidav)
See the [Seafile Server Manual](http://manual.seafile.com/extension/webdav.html) for details.

<!-- [](https://open.vscode.dev/mar10/wsgidav) -->
# Running

A generic and extendable [WebDAV](http://www.ietf.org/rfc/rfc4918.txt) server
written in Python and based on [WSGI](http://www.python.org/dev/peps/pep-3333/).
There is a template for running seafdav:
- run.sh.template: runs seafdav on the default port 8080 with the built-in CherryPy server.

Main features:
To run on port 8080:

- WsgiDAV is a stand-alone WebDAV server with SSL support, that can be
  installed and run as a Python command line script on Linux, macOS, and Windows:<br>
  ```
  $ pip install wsgidav cheroot
  $ wsgidav --host=0.0.0.0 --port=80 --root=/tmp --auth=anonymous
  Running without configuration file.
  10:54:16.597 - INFO    : WsgiDAV/4.0.0-a1 Python/3.9.1 macOS-12.0.1-x86_64-i386-64bit
  10:54:16.598 - INFO    : Registered DAV providers by route:
  10:54:16.598 - INFO    : - '/:dir_browser': FilesystemProvider for path '/Users/martin/prj/git/wsgidav/wsgidav/dir_browser/htdocs' (Read-Only) (anonymous)
  10:54:16.599 - INFO    : - '/': FilesystemProvider for path '/tmp' (Read-Write) (anonymous)
  10:54:16.599 - WARNING : Basic authentication is enabled: It is highly recommended to enable SSL.
  10:54:16.599 - WARNING : Share '/' will allow anonymous write access.
  10:54:16.813 - INFO    : Running WsgiDAV/4.0.0-a1 Cheroot/8.5.2 Python 3.9.1
  10:54:16.813 - INFO    : Serving on http://0.0.0.0:80 ...
  cp run.sh.template run.sh
  ```
  Run `wsgidav --help` for a list of available options.<br>

- The [python-pam](https://github.com/FirefighterBlu3/python-pam) library is
  needed as an extra requirement if pam-login authentication is used on Linux

@@ -41,56 +22,17 @@ Main features:
  ```
  $ pip install wsgidav[pam]
  $ wsgidav --host=0.0.0.0 --port=8080 --root=/tmp --auth=pam-login
  ```
Then change CCNET_CONF_DIR and SEAFILE_CONF_DIR to your Seafile server's settings.

- **Note:** Windows users may prefer the
  [MSI Installer](https://github.com/mar10/wsgidav/releases/latest)
  (see <kbd>Assets</kbd> section).
# Testing

- WebDAV is a superset of HTTP, so WsgiDAV is also a performant, multi-threaded
  web server with SSL support.

- WsgiDAV is also a Python library that implements the WSGI protocol and can
  be run behind any WSGI-compliant web server.<br>

- WsgiDAV is implemented as a configurable stack of WSGI middleware
  applications.<br>
  Its open architecture allows you to extend the functionality and integrate
  WebDAV services into your project.<br>
  Typical use cases are:
  - Expose data structures as virtual, editable file systems.
  - Allow online editing of MS Office documents.

## Status

[](https://pypi.python.org/pypi/WsgiDAV/)
See the [change log](https://github.com/mar10/wsgidav/blob/master/CHANGELOG.md) for details.

**Note:** Release 4.0 introduces some refactorings and breaking changes.<br>
See the [change log](https://github.com/mar10/wsgidav/blob/master/CHANGELOG.md) for details.

## More info

* [Read The Docs](http://wsgidav.rtfd.org) for details.
* [Discussion Group](https://github.com/mar10/wsgidav/discussions)
* [Stackoverflow](http://stackoverflow.com/questions/tagged/wsgidav)

## Credits

Contributors:

* WsgiDAV is a [refactored version](https://github.com/mar10/wsgidav/blob/master/docs/source/changelog04.md)
  of [PyFileServer 0.2](https://github.com/cwho/pyfileserver),
  Copyright (c) 2005 Ho Chun Wei.<br>
  Chun gave his approval to change the license from LGPL to MIT-License for
  this project.
* <https://github.com/mar10/wsgidav/contributors>
* Markus Majer for providing the logo (a mixture of the international
  maritime signal flag for 'W (Whiskey)' and a dove.)

Any kind of feedback is very welcome!<br>
Have fun :-)<br>
Martin
- start a local seafile server
- start a local seahub server (while seafdav itself doesn't require seahub, we use the seahub web API as a driver for testing)
- start the seafdav server
- create a test user `test@seafiletest.com` with password `test`
- run the tests
```
export CCNET_CONF_DIR=/path/to/ccnet
export SEAFILE_CONF_DIR=/path/to/seafile-data
./ci/functests.sh test
```
ci/functests.sh (new executable file, 67 lines)
@@ -0,0 +1,67 @@
set -e

if [ $# -lt "1" ]; then
    echo
    echo "Usage: ./functests.sh {init|runserver|test}"
    echo
    exit 1
fi

if [ ${TRAVIS} ]; then
    set -x
    CCNET_CONF_DIR="/tmp/seafile-server/tests/conf"
    SEAFILE_CONF_DIR="/tmp/seafile-server/tests/conf/seafile-data"
    PYTHONPATH="/usr/local/lib/python3.6/site-packages:/tmp/seafobj:${PYTHONPATH}"
    export PYTHONPATH
    export CCNET_CONF_DIR
    export SEAFILE_CONF_DIR
fi

function start_server() {
    ccnet-server -c /tmp/seafile-server/tests/conf -f - &
    seaf-server -c /tmp/seafile-server/tests/conf -d /tmp/seafile-server/tests/conf/seafile-data -f -l - &
    sleep 2
}

function init() {
    mkdir /tmp/seafile-server/tests/conf/seafile-data
    touch /tmp/seafile-server/tests/conf/seafile-data/seafile.conf
    cat > /tmp/seafile-server/tests/conf/seafile-data/seafile.conf << EOF
[database]
create_tables = true
EOF
    touch ${CCNET_CONF_DIR}/seafile.ini
    cat > ${CCNET_CONF_DIR}/seafile.ini << EOF
/tmp/seafile-server/tests/conf/seafile-data
EOF
    start_server
    python -c "from seaserv import ccnet_api as api;api.add_emailuser('test@seafiletest.com','test',0,1)"
}

function start_seafdav() {
    if [ ${TRAVIS} ]; then
        cd ${TRAVIS_BUILD_DIR}
        python -m wsgidav.server.server_cli --host=127.0.0.1 --port=8080 --root=/ --server=gunicorn &
        sleep 5
    fi
}

function run_tests() {
    cd seafdav_tests
    py.test
}

case $1 in
    "init")
        init
        ;;
    "runserver")
        start_seafdav
        ;;
    "test")
        run_tests
        ;;
    *)
        echo "unknown command \"$1\""
        ;;
esac
ci/install-deps.sh (new executable file, 38 lines)
@@ -0,0 +1,38 @@
#!/bin/bash

set -e -x

git clone --depth=1 --branch=master git://github.com/haiwen/libevhtp /tmp/libevhtp
cd /tmp/libevhtp
cmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .
make -j2
sudo make install
cd -

git clone --depth=1 --branch=master git://github.com/haiwen/libsearpc /tmp/libsearpc
cd /tmp/libsearpc
./autogen.sh
./configure
make -j2
sudo make install
cd -

git clone --depth=1 --branch=master git://github.com/haiwen/ccnet-server /tmp/ccnet-server
cd /tmp/ccnet-server
./autogen.sh
./configure
make -j2
sudo make install
cd -

git clone --depth=1 --branch=master git://github.com/haiwen/seafile-server /tmp/seafile-server
cd /tmp/seafile-server
./autogen.sh
./configure
make -j2
sudo make install
cd -

sudo ldconfig

git clone --depth=1 --branch=master git://github.com/haiwen/seafobj /tmp/seafobj
ci/requirements.txt (new file, 15 lines)
@@ -0,0 +1,15 @@
termcolor>=1.1.0
requests>=2.8.0
pytest>=3.3.2
backports.functools_lru_cache>=1.4
tenacity>=4.8.0
defusedxml~=0.5
Jinja2~=2.10
jsmin~=2.2
python-pam~=1.8
PyYAML~=5.1
six~=1.12
gunicorn
future
lxml
sqlalchemy

@@ -3,3 +3,6 @@ Jinja2~=3.0
json5~=0.8.5
python-pam~=2.0
PyYAML~=6.0
six~=1.13
lxml
sqlalchemy
run.sh.template (new file, 10 lines)
@@ -0,0 +1,10 @@
#!/bin/bash

export CCNET_CONF_DIR=/data/data/ccnet
export SEAFILE_CONF_DIR=/data/data/seafile-data

TOP_DIR=$(python -c "import os; print(os.path.dirname(os.path.realpath('$0')))")

cd "$TOP_DIR"

python -m wsgidav.server.run_server --host=0.0.0.0 --port=8080 --root=/ --server=gunicorn
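The `TOP_DIR` line shells out to Python to resolve the directory that contains the script itself (`"$0"` in the shell), with symlinks resolved. A standalone sketch of that computation (the demo path is hypothetical):

```python
import os

# what the template's embedded one-liner computes: the directory
# containing the given script path, with symlinks resolved
def top_dir(script_path):
    return os.path.dirname(os.path.realpath(script_path))

print(top_dir("/nonexistent-seafdav-demo/run.sh"))  # /nonexistent-seafdav-demo
```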
seafdav_tests/client.py (new file, 70 lines)
@@ -0,0 +1,70 @@
#coding: UTF-8

from easywebdav3 import easywebdav
import os
import io
import posixpath
from seaserv import seafile_api

USER = os.environ.get('SEAFILE_TEST_USERNAME', 'test@seafiletest.com')
PASSWORD = os.environ.get('SEAFILE_TEST_PASSWORD', 'test')

def get_webapi_client():
    apiclient = seafile_api.connect('http://127.0.0.1:8000', USER, PASSWORD)
    return apiclient

class SeafDavClient(object):
    """Wrapper around easywebdav to provide common operations on a seafile
    webdav server.

    Davfs2 would be a better option, but it's not supported on travis ci.
    """
    server = '127.0.0.1'
    port = 8080
    user = USER
    password = PASSWORD

    def __init__(self):
        self._dav = easywebdav.Client(self.server, port=self.port,
                                      username=self.user,
                                      password=self.password)

    def list_repos(self):
        return [e for e in self._dav.ls('/') if e.name != '/']

    def repo_listdir(self, repo, path='/'):
        repo_name = repo.get('name')
        path = posixpath.join('/', repo_name, path.lstrip('/'))
        if not path.endswith('/'):
            path += '/'
        entries = self._dav.ls(path)
        # the entries list also contains the path itself; we just filter it
        # out for convenience
        return [e for e in entries if e.name != path]

    def repo_mkdir(self, repo, parentdir, dirname):
        repo_name = repo.get('name')
        fullpath = posixpath.join('/', repo_name, parentdir.lstrip('/'), dirname)
        self._dav.mkdir(fullpath)

    def repo_getfile(self, repo, path):
        fobj = io.BytesIO()
        repo_name = repo.get('name')
        fullpath = posixpath.join('/', repo_name, path.lstrip('/'))
        self._dav.download(fullpath, fobj)
        return fobj.getvalue()

    def repo_uploadfile(self, repo, localpath_or_fileobj, path):
        repo_name = repo.get('name')
        fullpath = posixpath.join('/', repo_name, path.lstrip('/'))
        self._dav.upload(localpath_or_fileobj, fullpath)

    def repo_removedir(self, repo, path):
        repo_name = repo.get('name')
        fullpath = posixpath.join('/', repo_name, path.lstrip('/'))
        self._dav.rmdir(fullpath)

    def repo_removefile(self, repo, path):
        repo_name = repo.get('name')
        fullpath = posixpath.join('/', repo_name, path.lstrip('/'))
        self._dav.delete(fullpath)
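Every `repo_*` helper above maps a (repo, in-repo path) pair onto a WebDAV path by making the repo name the top-level collection. A minimal self-contained sketch of that mapping (`dav_path` is an illustrative name, not part of the module):

```python
import posixpath

def dav_path(repo_name, path, is_dir=False):
    # the repo name becomes the top-level collection; lstrip('/') keeps
    # posixpath.join from discarding the components before an absolute path
    full = posixpath.join('/', repo_name, path.lstrip('/'))
    if is_dir and not full.endswith('/'):
        full += '/'  # collection listings need the trailing slash
    return full

print(dav_path('My Library', '/docs/readme.txt'))  # /My Library/docs/readme.txt
print(dav_path('My Library', 'docs', is_dir=True))  # /My Library/docs/
```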
seafdav_tests/data/test.txt (new file, 1 line)
@@ -0,0 +1 @@
test
seafdav_tests/easywebdav3/__init__.py (new file, empty)
seafdav_tests/easywebdav3/easywebdav.py (new file, 181 lines)
@@ -0,0 +1,181 @@
import requests
import platform
from numbers import Number
import xml.etree.cElementTree as xml
from collections import namedtuple
from http.client import responses as HTTP_CODES
from urllib.parse import urlparse

DOWNLOAD_CHUNK_SIZE_BYTES = 1 * 1024 * 1024

class WebdavException(Exception):
    pass

class ConnectionFailed(WebdavException):
    pass


def codestr(code):
    return HTTP_CODES.get(code, 'UNKNOWN')


File = namedtuple('File', ['name', 'size', 'mtime', 'ctime', 'contenttype'])


def prop(elem, name, default=None):
    child = elem.find('.//{DAV:}' + name)
    return default if child is None else child.text


def elem2file(elem):
    return File(
        prop(elem, 'href'),
        int(prop(elem, 'getcontentlength', 0)),
        prop(elem, 'getlastmodified', ''),
        prop(elem, 'creationdate', ''),
        prop(elem, 'getcontenttype', ''),
    )


class OperationFailed(WebdavException):
    _OPERATIONS = dict(
        HEAD="get header",
        GET="download",
        PUT="upload",
        DELETE="delete",
        MKCOL="create directory",
        PROPFIND="list directory",
    )

    def __init__(self, method, path, expected_code, actual_code):
        self.method = method
        self.path = path
        self.expected_code = expected_code
        self.actual_code = actual_code
        operation_name = self._OPERATIONS[method]
        self.reason = 'Failed to {operation_name} "{path}"'.format(**locals())
        expected_codes = (expected_code,) if isinstance(expected_code, Number) else expected_code
        expected_codes_str = ", ".join('{0} {1}'.format(code, codestr(code)) for code in expected_codes)
        actual_code_str = codestr(actual_code)
        msg = '''\
{self.reason}.
  Operation     : {method} {path}
  Expected code : {expected_codes_str}
  Actual code   : {actual_code} {actual_code_str}'''.format(**locals())
        super(OperationFailed, self).__init__(msg)


class Client(object):
    def __init__(self, host, port=0, auth=None, username=None, password=None,
                 protocol='http', verify_ssl=True, path=None, cert=None):
        if not port:
            port = 443 if protocol == 'https' else 80
        self.baseurl = '{0}://{1}:{2}'.format(protocol, host, port)
        if path:
            self.baseurl = '{0}/{1}'.format(self.baseurl, path)
        self.cwd = '/'
        self.session = requests.session()
        self.session.verify = verify_ssl
        self.session.stream = True

        if cert:
            self.session.cert = cert

        if auth:
            self.session.auth = auth
        elif username and password:
            self.session.auth = (username, password)

    def _send(self, method, path, expected_code, **kwargs):
        url = self._get_url(path)
        response = self.session.request(method, url, allow_redirects=False, **kwargs)
        if isinstance(expected_code, Number) and response.status_code != expected_code \
                or not isinstance(expected_code, Number) and response.status_code not in expected_code:
            raise OperationFailed(method, path, expected_code, response.status_code)
        return response

    def _get_url(self, path):
        path = str(path).strip()
        if path.startswith('/'):
            return self.baseurl + path
        return "".join((self.baseurl, self.cwd, path))

    def cd(self, path):
        path = path.strip()
        if not path:
            return
        stripped_path = '/'.join(part for part in path.split('/') if part) + '/'
        if stripped_path == '/':
            self.cwd = stripped_path
        elif path.startswith('/'):
            self.cwd = '/' + stripped_path
        else:
            self.cwd += stripped_path

    def mkdir(self, path, safe=False):
        expected_codes = 201 if not safe else (201, 301, 405)
        self._send('MKCOL', path, expected_codes)

    def mkdirs(self, path):
        dirs = [d for d in path.split('/') if d]
        if not dirs:
            return
        if path.startswith('/'):
            dirs[0] = '/' + dirs[0]
        old_cwd = self.cwd
        try:
            for dir in dirs:
                try:
                    self.mkdir(dir, safe=True)
                except Exception as e:
                    if e.actual_code == 409:
                        raise
                finally:
                    self.cd(dir)
        finally:
            self.cd(old_cwd)

    def rmdir(self, path, safe=False):
        path = str(path).rstrip('/') + '/'
        expected_codes = 204 if not safe else (204, 404)
        self._send('DELETE', path, expected_codes)

    def delete(self, path):
        self._send('DELETE', path, 204)

    def upload(self, local_path_or_fileobj, remote_path):
        if isinstance(local_path_or_fileobj, str):
            with open(local_path_or_fileobj, 'rb') as f:
                self._upload(f, remote_path)
        else:
            self._upload(local_path_or_fileobj, remote_path)

    def _upload(self, fileobj, remote_path):
        self._send('PUT', remote_path, (200, 201, 204), data=fileobj)

    def download(self, remote_path, local_path_or_fileobj):
        response = self._send('GET', remote_path, 200, stream=True)
        if isinstance(local_path_or_fileobj, str):
            with open(local_path_or_fileobj, 'wb') as f:
                self._download(f, response)
        else:
            self._download(local_path_or_fileobj, response)

    def _download(self, fileobj, response):
        for chunk in response.iter_content(DOWNLOAD_CHUNK_SIZE_BYTES):
            fileobj.write(chunk)

    def ls(self, remote_path='.'):
        headers = {'Depth': '1'}
        response = self._send('PROPFIND', remote_path, (207, 301), headers=headers)

        # Redirect
        if response.status_code == 301:
            url = urlparse(response.headers['location'])
            return self.ls(url.path)

        tree = xml.fromstring(response.content)
        return [elem2file(elem) for elem in tree.findall('{DAV:}response')]

    def exists(self, remote_path):
        response = self._send('HEAD', remote_path, (200, 301, 404))
        return response.status_code != 404
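The `ls` method drives everything in this client: it issues a Depth-1 PROPFIND and flattens the `{DAV:}multistatus` XML into `File` tuples via `prop`/`elem2file`. The parsing step can be exercised in isolation against a hand-written sample response, no server needed:

```python
import xml.etree.ElementTree as xml

# a minimal PROPFIND multistatus body, as a WebDAV server would return it
SAMPLE = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/repo/</D:href>
    <D:propstat><D:prop>
      <D:getcontentlength>0</D:getcontentlength>
      <D:getlastmodified>Mon, 01 Jan 2024 00:00:00 GMT</D:getlastmodified>
    </D:prop></D:propstat>
  </D:response>
</D:multistatus>"""

def prop(elem, name, default=None):
    # same namespace-qualified lookup the module uses
    child = elem.find('.//{DAV:}' + name)
    return default if child is None else child.text

tree = xml.fromstring(SAMPLE)
entries = [(prop(e, 'href'), int(prop(e, 'getcontentlength', 0)))
           for e in tree.findall('{DAV:}response')]
print(entries)  # [('/repo/', 0)]
```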
seafdav_tests/test_webdav.py (new file, 238 lines)
@@ -0,0 +1,238 @@
#coding: UTF-8

import time
import os
import io
import unittest
import posixpath
import random
import string
from functools import wraps
from contextlib import contextmanager
from client import SeafDavClient, USER, PASSWORD
from easywebdav3.easywebdav import OperationFailed as WebDAVOperationFailed
from seaserv import seafile_api as api

davclient = SeafDavClient()
TEST_REPO = None

def randstring(length=20):
    return ''.join(random.choice(string.ascii_lowercase) for i in range(length))

def dav_basename(f):
    if isinstance(f, str):
        path = f
    else:
        path = f.name
    return posixpath.basename(path.rstrip('/'))

@contextmanager
def tmp_repo(name=None, desc=None):
    """Create a temporary repo for the test before the function executes, and
    delete the repo after that.

    Usage:

        with tmp_repo() as repo:
            ... do things with repo ...
    """
    name = name or randstring()
    desc = desc or randstring()
    repo_id = api.create_repo(name, desc, USER, enc_version=None)
    repo = {"id": repo_id, "name": name}
    try:
        yield repo
    finally:
        pass
        #api.remove_repo(repo_id)

def use_tmp_repo(func):
    """Create a temporary repo for the test before the function executes, and
    delete the repo after that.

    Typical usage:

        @use_tmp_repo
        def test_file_ops():
            repo = TEST_REPO
            ... use `repo` to do things ...
    """
    @wraps(func)
    def wrapper(*a, **kw):
        with tmp_repo() as _repo:
            global TEST_REPO
            TEST_REPO = _repo
            func(*a, **kw)
    return wrapper

class SeafDAVTestCase(unittest.TestCase):
    def test_list_repos(self):
        """Test listing repos at the top level."""
        def verify_repos_count(n=None):
            entries = davclient.list_repos()
            if n is not None:
                self.assertHasLen(entries, n)
            return entries

        nrepos = len(verify_repos_count())

        with tmp_repo() as repo:
            entries = verify_repos_count(nrepos + 1)
            self.assertIn(repo.get('name'), [dav_basename(f) for f in entries])

    def test_file_ops(self):
        """Test list/add/remove files and folders"""
        @use_tmp_repo
        def _test_under_path(path):
            repo = TEST_REPO
            path = path.rstrip('/')
            #sdir = repo.get_dir('/')
            parent_dir = '/'
            if path:
                dirs = [p for p in path.split('/') if p]
                for d in dirs:
                    api.post_dir(repo.get('id'), parent_dir, d, USER)
                    parent_dir = parent_dir + d + '/'
            entries = davclient.repo_listdir(repo, path)
            self.assertEmpty(entries)

            # create a folder from the web API and list it in webdav
            dirname = 'folder-%s' % randstring()
            api.post_dir(repo.get('id'), parent_dir, dirname, USER)

            entries = davclient.repo_listdir(repo, parent_dir)
            self.assertHasLen(entries, 1)
            sfolder = entries[0]
            self.assertEqual(dav_basename(sfolder), dirname)

            # create a file from the web API and list it in webdav
            testfpath = os.path.join(os.path.dirname(__file__), 'data', 'test.txt')
            with open(testfpath, 'rb') as fp:
                testfcontent = fp.read()
            fname = 'uploaded-file-%s.txt' % randstring()
            api.post_file(repo.get('id'), testfpath, parent_dir, fname, USER)
            entries = davclient.repo_listdir(repo, parent_dir)
            self.assertHasLen(entries, 2)
            downloaded_file = davclient.repo_getfile(repo, posixpath.join(parent_dir, fname))
            assert downloaded_file == testfcontent

            # create a folder through webdav, and check it in the web API
            dirname = 'another-level1-folder-%s' % randstring(10)
            davclient.repo_mkdir(repo, parent_dir, dirname)
            entries = api.list_dir_by_path(repo.get('id'), parent_dir)
            self.assertHasLen(entries, 3)
            davdir = [e for e in entries if e.obj_name == dirname][0]
            self.assertEqual(davdir.obj_name, dirname)

            # upload a file through webdav, and check it in the web API
            fname = 'uploaded-file-%s' % randstring()
            repo_fpath = posixpath.join(parent_dir, fname)
            davclient.repo_uploadfile(repo, testfpath, repo_fpath)
            entries = api.list_dir_by_path(repo.get('id'), parent_dir)
            self.assertHasLen(entries, 4)

            # remove a dir through webdav
            self.assertIn(dirname, [dirent.obj_name for dirent in \
                                    api.list_dir_by_path(repo.get('id'), parent_dir)])
            davclient.repo_removedir(repo, os.path.join(parent_dir, dirname))
            entries = api.list_dir_by_path(repo.get('id'), parent_dir)
            self.assertHasLen(entries, 3)
            self.assertNotIn(dirname, [dirent.obj_name for dirent in entries])

            # remove a file through webdav
            self.assertIn(fname, [dirent.obj_name for dirent in \
                                  api.list_dir_by_path(repo.get('id'), parent_dir)])
            davclient.repo_removefile(repo, os.path.join(parent_dir, fname))
            entries = api.list_dir_by_path(repo.get('id'), parent_dir)
            self.assertHasLen(entries, 2)
            self.assertNotIn(fname, [dirent.obj_name for dirent in entries])

        _test_under_path('/')
        _test_under_path('/level1-folder-%s' % randstring(10))
        _test_under_path('/level1-folder-%s/level2-folder-%s' %
                         (randstring(5), randstring(5)))

    def test_copy_move(self):
        """Test copy/move files and folders."""
        # XXX: python-easywebdav does not support the webdav COPY/MOVE operations yet.
        # with tmp_repo() as ra:
        #     with tmp_repo() as rb:
        #         roota = ra.get_dir('/')
        #         rootb = rb.get_dir('/')
        pass

    def test_repo_name_conflict(self):
        """Test the case when multiple repos have the same name"""
        repo_name = randstring(length=20)
        with tmp_repo(name=repo_name) as ra:
            with tmp_repo(name=repo_name) as rb:
                davrepos = davclient.list_repos()
                repos = [r for r in davrepos if dav_basename(r).startswith(repo_name)]
                self.assertHasLen(repos, 2)
                repos = sorted(repos, key=lambda x: x.name)
                if rb.get('id') < ra.get('id'):
                    rb, ra = ra, rb
                self.assertEqual(dav_basename(repos[0]), '%s-%s' % (repo_name, ra.get('id')[:6]))
                self.assertEqual(dav_basename(repos[1]), '%s-%s' % (repo_name, rb.get('id')[:6]))

    @use_tmp_repo
    def test_quota_check(self):
        """Assert that the user storage quota can not be exceeded"""
        assert api.set_user_quota(USER, 0) >= 0
        repo = TEST_REPO
        testfn = 'test.txt'
        testfpath = os.path.join(os.path.dirname(__file__), 'data', testfn)
        testfilesize = os.stat(testfpath).st_size
        api.post_file(repo.get('id'), testfpath, '/', '%s' % randstring(), USER)

        _wait_repo_size_recompute(repo, testfilesize)
        with _set_quota(USER, testfilesize):
            with self.assertRaises(WebDAVOperationFailed) as cm:
                davclient.repo_uploadfile(repo, testfpath, '/%s' % randstring())
            self.assertEqual(cm.exception.actual_code, 403,
                             'the operation should fail because the quota is full')

            # Attempts to create empty files should also fail
            with self.assertRaises(WebDAVOperationFailed) as cm:
                empty_fileobj = io.BytesIO()
                davclient.repo_uploadfile(repo, empty_fileobj, '/%s' % randstring())
            self.assertEqual(cm.exception.actual_code, 403,
                             'the operation should fail because the quota is full')

        # After the quota is restored, the upload should succeed
        repo_fpath = '/%s' % randstring()
        davclient.repo_uploadfile(repo, testfpath, repo_fpath)
        with open(testfpath, 'rb') as fp:
            assert fp.read() == davclient.repo_getfile(repo, repo_fpath)

    def assertHasLen(self, obj, expected_length):
        actuallen = len(obj)
        msg = 'Expected length is %s, but actual length is %s' % (expected_length, actuallen)
        self.assertEqual(actuallen, expected_length, msg)

    def assertEmpty(self, obj):
        self.assertHasLen(obj, 0)

@contextmanager
def _set_quota(user, quota):
    """Set the quota of the user to the given value, and restore the old value on exit"""
    oldquota = api.get_user_quota(user)
    if api.set_user_quota(user, quota) < 0:
        raise RuntimeError('failed to change user quota')
    assert api.get_user_quota(user) == quota
    try:
        yield
    finally:
        api.set_user_quota(user, oldquota)


def _wait_repo_size_recompute(repo, size, maxretry=30):
    reposize = api.get_repo_size(repo.get('id'))
    retry = 0
    while reposize != size:
        if retry >= maxretry:
            assert False, 'repo size not recomputed in %s seconds' % maxretry
        retry += 1
        print('computed = %s, expected = %s' % (reposize, size))
        time.sleep(1)
        reposize = api.get_repo_size(repo.get('id'))
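Two tiny helpers carry the naming logic in these tests: `randstring` generates collision-safe repo and file names, and `dav_basename` strips the trailing slash that PROPFIND hrefs put on collections before taking the basename. In isolation:

```python
import posixpath
import random
import string

def randstring(length=20):
    # lowercase-only names avoid case-sensitivity surprises
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(length))

def dav_basename(path):
    # collection hrefs end with '/', so strip it before taking the basename
    return posixpath.basename(path.rstrip('/'))

print(dav_basename('/My Repo/folder/'))  # folder
print(len(randstring(8)))  # 8
```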
test-requirements.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
requests>=2.3.0
nose
pytest
123
wsgidav/dc/domain_controller.py
Normal file
@@ -0,0 +1,123 @@
import base64
import posixpath

import ccnet
from pysearpc import SearpcError
from wsgidav.dc.seaf_utils import CCNET_CONF_DIR, SEAFILE_CENTRAL_CONF_DIR, multi_tenancy_enabled
from wsgidav.dc import seahub_db
import wsgidav.util as util
from wsgidav.dc.base_dc import BaseDomainController
# basic_auth_user, get_domain_realm, require_authentication

_logger = util.get_module_logger(__name__)

# the key size for the cipher object; must be 16, 24, or 32 for AES
BLOCK_SIZE = 32

PADDING = '{'

# An encrypted block size must be a multiple of 16
pad = lambda s: s + (16 - len(s) % 16) * PADDING
# encrypt with AES, encode with base64
EncodeAES = lambda c, s: base64.b64encode(c.encrypt(pad(s).encode())).decode()


class SeafileDomainController(BaseDomainController):

    def __init__(self, wsgidav_app, config):
        self.ccnet_threaded_rpc = ccnet.CcnetThreadedRpcClient(
            posixpath.join(CCNET_CONF_DIR, 'ccnet-rpc.sock'))
        self.session_cls = seahub_db.init_db_session_class()

    def __repr__(self):
        return self.__class__.__name__

    def supports_http_digest_auth(self):
        # We have access to a plaintext password (or stored hash)
        return True

    def get_domain_realm(self, inputURL, environ):
        return "Seafile Authentication"

    def require_authentication(self, realmname, environ):
        return True

    def isRealmUser(self, realmname, username, environ):
        return True

    def getRealmUserPassword(self, realmname, username, environ):
        """Not applicable to seafile."""
        return ""

    def basic_auth_user(self, realmname, username, password, environ):
        if "'" in username:
            return False

        try:
            ccnet_email = None
            session = None
            if self.session_cls:
                session = self.session_cls()

            user = self.ccnet_threaded_rpc.get_emailuser(username)
            if user:
                ccnet_email = user.email
            elif session:
                profile_profile = seahub_db.Base.classes.profile_profile
                q = session.query(profile_profile.user).filter(
                    profile_profile.contact_email == username)
                res = q.first()
                if res:
                    ccnet_email = res[0]

            if not ccnet_email:
                _logger.warning("User %s doesn't exist", username)
                if session:
                    session.close()
                return False

            if self.ccnet_threaded_rpc.validate_emailuser(ccnet_email, password) != 0:
                if not session:
                    return False
                from Crypto.Cipher import AES
                import seahub_settings
                secret = seahub_settings.SECRET_KEY[:BLOCK_SIZE]
                cipher = AES.new(secret.encode(), AES.MODE_ECB)
                encoded_str = 'aes$' + EncodeAES(cipher, password)
                options_useroptions = seahub_db.Base.classes.options_useroptions
                q = session.query(options_useroptions.email)
                q = q.filter(options_useroptions.email == ccnet_email,
                             options_useroptions.option_val == encoded_str)
                res = q.first()
                if not res:
                    session.close()
                    return False

            if session:
                session.close()
            username = ccnet_email
        except Exception as e:
            _logger.warning('Failed to login: %s', e)
            return False

        try:
            user = self.ccnet_threaded_rpc.get_emailuser_with_import(username)
            environ['seafile.is_guest'] = (user.role == 'guest')
        except Exception:
            _logger.exception('get_emailuser')

        if multi_tenancy_enabled():
            try:
                orgs = self.ccnet_threaded_rpc.get_orgs_by_user(username)
                if orgs:
                    environ['seafile.org_id'] = orgs[0].org_id
            except Exception:
                _logger.exception('get_orgs_by_user')

        environ["http_authenticator.username"] = username

        return True
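The `'{'`-padding used by `EncodeAES` above simply extends the password to the next 16-character boundary, since AES-ECB encrypts fixed 16-byte blocks. The padding step in isolation:

```python
PADDING = '{'

# Pad s out to the next multiple of 16 characters; an exact multiple
# still gains a full extra block of padding, as in the code above.
pad = lambda s: s + (16 - len(s) % 16) * PADDING

padded = pad('secret')       # 16 characters, starts with 'secret'
full_block = pad('0123456789abcdef')  # already 16 chars -> padded to 32
```

Note that `'{'` is not removable padding in the PKCS#7 sense; it only works here because the padded ciphertext is compared for equality, never decrypted and stripped.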
42
wsgidav/dc/seaf_utils.py
Normal file
@@ -0,0 +1,42 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import configparser

import wsgidav.util as util

_logger = util.get_module_logger(__name__)


def _load_path_from_env(key, check=True):
    v = os.environ.get(key, '')
    if not v:
        if check:
            raise ImportError(
                "seaf_utils cannot be imported, because environment variable %s is undefined." % key)
        else:
            return None
    return os.path.normpath(os.path.expanduser(v))


CCNET_CONF_DIR = _load_path_from_env('CCNET_CONF_DIR')
SEAFILE_CONF_DIR = _load_path_from_env('SEAFILE_CONF_DIR')
SEAFILE_CENTRAL_CONF_DIR = _load_path_from_env(
    'SEAFILE_CENTRAL_CONF_DIR', check=False)

_multi_tenancy_enabled = None


def multi_tenancy_enabled():
    global _multi_tenancy_enabled
    if _multi_tenancy_enabled is None:
        _multi_tenancy_enabled = False
        try:
            cp = configparser.ConfigParser()
            cp.read(os.path.join(
                SEAFILE_CENTRAL_CONF_DIR if SEAFILE_CENTRAL_CONF_DIR else SEAFILE_CONF_DIR,
                'seafile.conf'))
            if cp.has_option('general', 'multi_tenancy'):
                _multi_tenancy_enabled = cp.getboolean('general', 'multi_tenancy')
        except Exception:
            _logger.exception('failed to read multi_tenancy')
    return _multi_tenancy_enabled
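`_load_path_from_env` above normalizes a path taken from the environment and either raises or returns `None` when the variable is unset. The same behavior in isolation, with a hypothetical `DEMO_*` variable name for illustration:

```python
import os

def load_path_from_env(key, check=True):
    """Return a normalized path from the environment; raise (or return None) if unset."""
    v = os.environ.get(key, '')
    if not v:
        if check:
            raise ImportError('environment variable %s is undefined' % key)
        return None
    # expanduser resolves a leading '~', normpath collapses '//' and '/./'
    return os.path.normpath(os.path.expanduser(v))

os.environ['DEMO_CONF_DIR'] = '/tmp//seafile/./conf'
p = load_path_from_env('DEMO_CONF_DIR')                    # '/tmp/seafile/conf'
missing = load_path_from_env('DEMO_MISSING', check=False)  # None
```

Raising `ImportError` at module import time makes a missing `CCNET_CONF_DIR`/`SEAFILE_CONF_DIR` fail fast, before any request handling starts.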
77
wsgidav/dc/seahub_db.py
Normal file
@@ -0,0 +1,77 @@
from urllib.parse import quote_plus

from sqlalchemy import create_engine
from sqlalchemy.event import contains as has_event_listener, listen as add_event_listener
from sqlalchemy.exc import DisconnectionError
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import Pool
from sqlalchemy.ext.automap import automap_base

import wsgidav.util as util

Base = automap_base()

_logger = util.get_module_logger(__name__)


def init_db_session_class():
    try:
        _logger.info('Init seahub database...')
        engine = create_seahub_db_engine()
        Base.prepare(engine, reflect=True)
        Session = sessionmaker(bind=engine)
        return Session
    except Exception as e:
        _logger.warning('Failed to init seahub db: %s.', e)
        return None


def create_seahub_db_engine():
    import seahub_settings
    db_infos = seahub_settings.DATABASES['default']
    #import local_settings
    #db_infos = local_settings.DATABASES['default']

    if db_infos.get('ENGINE') != 'django.db.backends.mysql':
        _logger.warning('Failed to init seahub db, only mysql db supported.')
        return

    db_host = db_infos.get('HOST', '127.0.0.1')
    db_port = int(db_infos.get('PORT', '3306'))
    db_name = db_infos.get('NAME')
    if not db_name:
        _logger.warning('Failed to init seahub db, db name is not set.')
        return
    db_user = db_infos.get('USER')
    if not db_user:
        _logger.warning('Failed to init seahub db, db user is not set.')
        return
    db_passwd = db_infos.get('PASSWORD')

    db_url = "mysql+mysqldb://%s:%s@%s:%s/%s?charset=utf8" % (
        db_user, quote_plus(db_passwd), db_host, db_port, db_name)

    # Add pool_recycle, or the mysql connection will be closed by mysqld if
    # idle for too long.
    kwargs = dict(pool_recycle=300, echo=False, echo_pool=False)

    engine = create_engine(db_url, **kwargs)
    if not has_event_listener(Pool, 'checkout', ping_connection):
        # We use has_event_listener to double check, in case we call
        # create_engine multiple times in the same process.
        add_event_listener(Pool, 'checkout', ping_connection)

    return engine


# This is used to fix the "MySQL has gone away" problem that happens when the
# mysql server is restarted, or when the pooled connections are closed by the
# mysql server because they have been idle for too long.
#
# See http://stackoverflow.com/a/17791117/1467959
def ping_connection(dbapi_connection, connection_record, connection_proxy):  # pylint: disable=unused-argument
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
        cursor.close()
    except Exception:
        _logger.info('fail to ping database server, disposing all cached connections')
        connection_proxy._pool.dispose()  # pylint: disable=protected-access

        # Raise DisconnectionError so the pool will create a new connection
        raise DisconnectionError()
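The `ping_connection` listener above implements a generic check-out health probe: issue `SELECT 1` on the raw DB-API connection and discard it on failure. A stdlib-only sketch of the same probe, using sqlite3 purely as a stand-in for the MySQL connection:

```python
import sqlite3

def connection_is_alive(conn):
    """Probe a DB-API connection with SELECT 1, as ping_connection does."""
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        cur.close()
        return True
    except Exception:
        # ping_connection would dispose the pool and raise DisconnectionError
        # here; we just report the connection as dead.
        return False

conn = sqlite3.connect(":memory:")
alive_before = connection_is_alive(conn)
conn.close()
alive_after = connection_is_alive(conn)  # a closed connection raises on cursor()
```

On SQLAlchemy 1.2+ the same effect is available declaratively via `create_engine(url, pool_pre_ping=True)`, which pings with a cheap statement on every checkout.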
716
wsgidav/seafile_dav_provider.py
Normal file
@@ -0,0 +1,716 @@
import os
import posixpath
import tempfile

from wsgidav.dav_error import DAVError, HTTP_BAD_REQUEST, HTTP_FORBIDDEN, \
    HTTP_NOT_FOUND, HTTP_INTERNAL_ERROR
from wsgidav.dav_provider import DAVProvider, DAVCollection, DAVNonCollection
import wsgidav.util as util

from seaserv import seafile_api, CALC_SHARE_USAGE
from pysearpc import SearpcError
from seafobj import commit_mgr, fs_mgr
from seafobj.fs import SeafFile, SeafDir
from wsgidav.dc.seaf_utils import SEAFILE_CONF_DIR

__docformat__ = "reStructuredText"

_logger = util.get_module_logger(__name__)

NEED_PROGRESS = 0
SYNCHRONOUS = 1

INFINITE_QUOTA = -2


def sort_repo_list(repos):
    return sorted(repos, key=lambda r: r.id)


#===============================================================================
# SeafileResource
#===============================================================================
class SeafileResource(DAVNonCollection):
    def __init__(self, path, repo, rel_path, obj, environ):
        super(SeafileResource, self).__init__(path, environ)
        self.repo = repo
        self.rel_path = rel_path
        self.obj = obj
        self.username = environ.get("http_authenticator.username", "")
        self.org_id = environ.get("seafile.org_id", "")
        self.is_guest = environ.get("seafile.is_guest", False)
        self.tmpfile_path = None
        self.owner = None

    # Getter methods for standard live properties
    def get_content_length(self):
        return self.obj.size

    def get_content_type(self):
        # (mimetype, _mimeencoding) = mimetypes.guess_type(self.path)
        # if not mimetype:
        #     mimetype = "application/octet-stream"
        # return mimetype
        return util.guess_mime_type(self.path)

    def get_creation_date(self):
        # return int(time.time())
        return None

    def get_display_name(self):
        return self.name

    def get_etag(self):
        return self.obj.obj_id

    def get_last_modified(self):
        cached_mtime = getattr(self.obj, 'last_modified', None)
        if cached_mtime:
            return cached_mtime

        if self.obj.mtime > 0:
            return self.obj.mtime

        # XXX: What about not returning last modified for files in v0 repos,
        # since that can be too expensive sometimes?
        parent, filename = os.path.split(self.rel_path)
        mtimes = seafile_api.get_files_last_modified(self.repo.id, parent, -1)
        for mtime in mtimes:
            if mtime.file_name == filename:
                return mtime.last_modified

        return None

    def support_etag(self):
        return True

    def support_ranges(self):
        return False

    def get_content(self):
        """Open content as a stream for reading.

        See DAVResource.getContent()
        """
        assert not self.is_collection
        return self.obj.get_stream()

    def check_repo_owner_quota(self, isnewfile=True, contentlength=-1):
        """Check if the upload would cause the user quota to be exceeded.

        `contentlength` is only positive when the client does not use
        "Transfer-Encoding: chunked".

        Return True if the quota would not be exceeded, otherwise return False.
        """
        if contentlength <= 0:
            # When the client uses "Transfer-Encoding: chunked", the content
            # length is not included in the request headers
            if isnewfile:
                return seafile_api.check_quota(self.repo.id) >= 0
            else:
                return True
        else:
            delta = contentlength - self.obj.size
            return seafile_api.check_quota(self.repo.id, delta) >= 0

    def begin_write(self, content_type=None, isnewfile=True, contentlength=-1):
        """Open content as a stream for writing.

        See DAVResource.beginWrite()
        """
        assert not self.is_collection
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        if seafile_api.check_permission_by_path(self.repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        if not self.check_repo_owner_quota(isnewfile, contentlength):
            raise DAVError(HTTP_FORBIDDEN, "The quota of the repo owner is exceeded")

        fd, path = tempfile.mkstemp(dir=self.provider.tmpdir)
        self.tmpfile_path = path
        return os.fdopen(fd, "wb")

    def end_write(self, with_errors, isnewfile=True):
        if not with_errors:
            parent, filename = os.path.split(self.rel_path)
            contentlength = os.stat(self.tmpfile_path).st_size
            if not self.check_repo_owner_quota(isnewfile=isnewfile, contentlength=contentlength):
                raise DAVError(HTTP_FORBIDDEN, "The quota of the repo owner is exceeded")
            seafile_api.put_file(self.repo.id, self.tmpfile_path, parent, filename,
                                 self.username, None)
        if self.tmpfile_path:
            try:
                os.unlink(self.tmpfile_path)
            finally:
                self.tmpfile_path = None

    def handle_delete(self):
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        if seafile_api.check_permission_by_path(self.repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        parent, filename = os.path.split(self.rel_path)
        seafile_api.del_file(self.repo.id, parent, filename, self.username)

        return True

    def handle_move(self, dest_path):
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        parts = dest_path.strip("/").split("/", 1)
        if len(parts) <= 1:
            raise DAVError(HTTP_BAD_REQUEST)
        repo_name = parts[0]
        rel_path = parts[1]

        dest_dir, dest_file = os.path.split(rel_path)
        dest_repo = getRepoByName(repo_name, self.username, self.org_id, self.is_guest)

        if seafile_api.check_permission_by_path(dest_repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        src_dir, src_file = os.path.split(self.rel_path)
        if not src_file:
            raise DAVError(HTTP_BAD_REQUEST)

        if not seafile_api.is_valid_filename(dest_repo.id, dest_file):
            raise DAVError(HTTP_BAD_REQUEST)

        # some clients, such as GoodReader, require "overwrite" semantics
        file_id_dest = seafile_api.get_file_id_by_path(dest_repo.id, rel_path)
        if file_id_dest is not None:
            seafile_api.del_file(dest_repo.id, dest_dir, dest_file, self.username)

        seafile_api.move_file(self.repo.id, src_dir, src_file,
                              dest_repo.id, dest_dir, dest_file, 1,
                              self.username, NEED_PROGRESS, SYNCHRONOUS)

        return True

    def handle_copy(self, dest_path, depth_infinity):
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        parts = dest_path.strip("/").split("/", 1)
        if len(parts) <= 1:
            raise DAVError(HTTP_BAD_REQUEST)
        repo_name = parts[0]
        rel_path = parts[1]

        dest_dir, dest_file = os.path.split(rel_path)
        dest_repo = getRepoByName(repo_name, self.username, self.org_id, self.is_guest)

        if seafile_api.check_permission_by_path(dest_repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        src_dir, src_file = os.path.split(self.rel_path)
        if not src_file:
            raise DAVError(HTTP_BAD_REQUEST)

        if not seafile_api.is_valid_filename(dest_repo.id, dest_file):
            raise DAVError(HTTP_BAD_REQUEST)

        seafile_api.copy_file(self.repo.id, src_dir, src_file,
                              dest_repo.id, dest_dir, dest_file,
                              self.username, NEED_PROGRESS, SYNCHRONOUS)

        return True
#===============================================================================
# SeafDirResource
#===============================================================================
class SeafDirResource(DAVCollection):
    def __init__(self, path, repo, rel_path, obj, environ):
        super(SeafDirResource, self).__init__(path, environ)
        self.repo = repo
        self.rel_path = rel_path
        self.obj = obj
        self.username = environ.get("http_authenticator.username", "")
        self.org_id = environ.get("seafile.org_id", "")
        self.is_guest = environ.get("seafile.is_guest", False)

    # Getter methods for standard live properties
    def get_creation_date(self):
        # return int(time.time())
        return None

    def get_display_name(self):
        return self.name

    def get_directory_info(self):
        return None

    def get_etag(self):
        return self.obj.obj_id

    def get_last_modified(self):
        # return int(time.time())
        return None

    def get_member_names(self):
        namelist = []
        for e in self.obj.dirs:
            namelist.append(e[0])
        for e in self.obj.files:
            namelist.append(e[0])
        return namelist

    def get_member(self, name):
        member_rel_path = "/".join([self.rel_path, name])
        member_path = "/".join([self.path, name])
        member = self.obj.lookup(name)

        if not member:
            raise DAVError(HTTP_NOT_FOUND)

        if isinstance(member, SeafFile):
            return SeafileResource(member_path, self.repo, member_rel_path, member, self.environ)
        else:
            return SeafDirResource(member_path, self.repo, member_rel_path, member, self.environ)

    def get_member_list(self):
        member_list = []
        d = self.obj

        mtimes = {}
        if d.version == 0:
            file_mtimes = []
            try:
                file_mtimes = seafile_api.get_files_last_modified(self.repo.id, self.rel_path, -1)
            except Exception:
                raise DAVError(HTTP_INTERNAL_ERROR)

            for entry in file_mtimes:
                mtimes[entry.file_name] = entry.last_modified

        for name, dent in d.dirents.items():
            member_path = posixpath.join(self.path, name)
            member_rel_path = posixpath.join(self.rel_path, name)

            if dent.is_dir():
                obj = fs_mgr.load_seafdir(d.store_id, d.version, dent.id)
                res = SeafDirResource(member_path, self.repo, member_rel_path, obj, self.environ)
            elif dent.is_file():
                obj = fs_mgr.load_seafile(d.store_id, d.version, dent.id)
                res = SeafileResource(member_path, self.repo, member_rel_path, obj, self.environ)
            else:
                continue

            if d.version == 1:
                obj.last_modified = dent.mtime
            else:
                obj.last_modified = mtimes.get(name)

            member_list.append(res)

        return member_list

    # --- Read / write ---------------------------------------------------------
    def create_empty_resource(self, name):
        """Create an empty (length-0) resource.

        See DAVResource.createEmptyResource()
        """
        assert "/" not in name
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        if seafile_api.check_permission_by_path(self.repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        if seafile_api.check_quota(self.repo.id) < 0:
            raise DAVError(HTTP_FORBIDDEN, "The quota of the repo owner is exceeded")

        try:
            seafile_api.post_empty_file(self.repo.id, self.rel_path, name, self.username)
        except SearpcError as e:
            if e.msg == 'Invalid file name':
                raise DAVError(HTTP_BAD_REQUEST)
            raise

        # Repo was updated, can't use self.repo
        repo = seafile_api.get_repo(self.repo.id)
        if not repo:
            raise DAVError(HTTP_INTERNAL_ERROR)

        member_rel_path = "/".join([self.rel_path, name])
        member_path = "/".join([self.path, name])
        obj = resolveRepoPath(repo, member_rel_path)
        if not obj or not isinstance(obj, SeafFile):
            raise DAVError(HTTP_INTERNAL_ERROR)

        return SeafileResource(member_path, repo, member_rel_path, obj, self.environ)

    def create_collection(self, name):
        """Create a new collection as a member of self.

        See DAVResource.createCollection()
        """
        assert "/" not in name
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        if seafile_api.check_permission_by_path(self.repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        if not seafile_api.is_valid_filename(self.repo.id, name):
            raise DAVError(HTTP_BAD_REQUEST)

        seafile_api.post_dir(self.repo.id, self.rel_path, name, self.username)

    def handle_delete(self):
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        if seafile_api.check_permission_by_path(self.repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        parent, filename = os.path.split(self.rel_path)
        # Can't delete repo root
        if not filename:
            raise DAVError(HTTP_BAD_REQUEST)

        seafile_api.del_file(self.repo.id, parent, filename, self.username)

        return True

    def handle_move(self, dest_path):
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        parts = dest_path.strip("/").split("/", 1)
        if len(parts) <= 1:
            raise DAVError(HTTP_BAD_REQUEST)
        repo_name = parts[0]
        rel_path = parts[1]

        dest_dir, dest_file = os.path.split(rel_path)
        dest_repo = getRepoByName(repo_name, self.username, self.org_id, self.is_guest)

        if seafile_api.check_permission_by_path(dest_repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        src_dir, src_file = os.path.split(self.rel_path)
        if not src_file:
            raise DAVError(HTTP_BAD_REQUEST)

        if not seafile_api.is_valid_filename(dest_repo.id, dest_file):
            raise DAVError(HTTP_BAD_REQUEST)

        seafile_api.move_file(self.repo.id, src_dir, src_file,
                              dest_repo.id, dest_dir, dest_file, 0,
                              self.username, NEED_PROGRESS, SYNCHRONOUS)

        return True

    def handle_copy(self, dest_path, depth_infinity):
        if self.provider.readonly:
            raise DAVError(HTTP_FORBIDDEN)

        parts = dest_path.strip("/").split("/", 1)
        if len(parts) <= 1:
            raise DAVError(HTTP_BAD_REQUEST)
        repo_name = parts[0]
        rel_path = parts[1]

        dest_dir, dest_file = os.path.split(rel_path)
        dest_repo = getRepoByName(repo_name, self.username, self.org_id, self.is_guest)

        if seafile_api.check_permission_by_path(dest_repo.id, self.rel_path, self.username) != "rw":
            raise DAVError(HTTP_FORBIDDEN)

        src_dir, src_file = os.path.split(self.rel_path)
        if not src_file:
            raise DAVError(HTTP_BAD_REQUEST)

        if not seafile_api.is_valid_filename(dest_repo.id, dest_file):
            raise DAVError(HTTP_BAD_REQUEST)

        seafile_api.copy_file(self.repo.id, src_dir, src_file,
                              dest_repo.id, dest_dir, dest_file,
                              self.username, NEED_PROGRESS, SYNCHRONOUS)

        return True
class RootResource(DAVCollection):
    def __init__(self, username, environ):
        super(RootResource, self).__init__("/", environ)
        self.username = username
        self.org_id = environ.get('seafile.org_id', '')
        self.is_guest = environ.get('seafile.is_guest', False)

    # Getter methods for standard live properties
    def get_creation_date(self):
        # return int(time.time())
        return None

    def get_display_name(self):
        return ""

    def get_directory_info(self):
        return None

    def get_etag(self):
        return None

    def get_last_modified(self):
        # return int(time.time())
        return None

    def get_member_names(self):
        all_repos = getAccessibleRepos(self.username, self.org_id, self.is_guest)

        name_hash = {}
        for r in all_repos:
            r_list = name_hash.get(r.name)
            if not r_list:
                name_hash[r.name] = [r]
            else:
                r_list.append(r)

        namelist = []
        for r_list in name_hash.values():
            if len(r_list) == 1:
                repo = r_list[0]
                namelist.append(repo.name)
            else:
                for repo in sort_repo_list(r_list):
                    unique_name = repo.name + "-" + repo.id[:6]
                    namelist.append(unique_name)

        return namelist

    def get_member(self, name):
        repo = getRepoByName(name, self.username, self.org_id, self.is_guest)
        return self._createRootRes(repo, name)

    def get_member_list(self):
        """
        Overwrite this method for better performance.

        The default implementation calls get_member_names() and then
        get_member() for each name, which calls getAccessibleRepos()
        far too many times.
        """
        all_repos = getAccessibleRepos(self.username, self.org_id, self.is_guest)

        name_hash = {}
        for r in all_repos:
            r_list = name_hash.get(r.name)
            if not r_list:
                name_hash[r.name] = [r]
            else:
                r_list.append(r)

        member_list = []
        for r_list in name_hash.values():
            if len(r_list) == 1:
                repo = r_list[0]
                res = self._createRootRes(repo, repo.name)
                member_list.append(res)
            else:
                for repo in sort_repo_list(r_list):
                    unique_name = repo.name + "-" + repo.id[:6]
                    res = self._createRootRes(repo, unique_name)
                    member_list.append(res)

        return member_list

    def _createRootRes(self, repo, name):
        obj = get_repo_root_seafdir(repo)
        return SeafDirResource("/" + name, repo, "", obj, self.environ)

    # --- Read / write ---------------------------------------------------------

    def create_empty_resource(self, name):
        raise DAVError(HTTP_FORBIDDEN)

    def create_collection(self, name):
        raise DAVError(HTTP_FORBIDDEN)

    def handle_delete(self):
        raise DAVError(HTTP_FORBIDDEN)

    def handle_move(self, dest_path):
        raise DAVError(HTTP_FORBIDDEN)

    def handle_copy(self, dest_path, depth_infinity):
        raise DAVError(HTTP_FORBIDDEN)
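`RootResource` above disambiguates repos that share a name by suffixing the first six characters of the repo id, in id order. A self-contained sketch of that naming scheme, with a hypothetical `Repo` namedtuple standing in for the objects `seafile_api` returns:

```python
from collections import namedtuple

# Hypothetical minimal repo record; the real objects come from seafile_api.
Repo = namedtuple('Repo', ['id', 'name'])

def webdav_names(repos):
    """Mirror the RootResource naming: unique names pass through, duplicates
    get a '-<first 6 chars of id>' suffix, ordered by repo id."""
    by_name = {}
    for r in repos:
        by_name.setdefault(r.name, []).append(r)
    names = []
    for r_list in by_name.values():
        if len(r_list) == 1:
            names.append(r_list[0].name)
        else:
            for r in sorted(r_list, key=lambda r: r.id):
                names.append(r.name + '-' + r.id[:6])
    return names

repos = [Repo('aaaaaaaa-1111', 'docs'),
         Repo('bbbbbbbb-2222', 'docs'),
         Repo('cccccccc-3333', 'music')]
names = webdav_names(repos)
```

Sorting by id makes the suffixed names stable across requests, which matters because WebDAV clients cache directory listings and later resolve members by name.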
#===============================================================================
# SeafileProvider
#===============================================================================
class SeafileProvider(DAVProvider):

    def __init__(self, readonly=False):
        super(SeafileProvider, self).__init__()
        self.readonly = readonly
        self.tmpdir = os.path.join(SEAFILE_CONF_DIR, "webdavtmp")
        if not os.access(self.tmpdir, os.F_OK):
            os.mkdir(self.tmpdir)

    def __repr__(self):
        rw = "Read-Write"
        if self.readonly:
            rw = "Read-Only"
        return "%s for Seafile (%s)" % (self.__class__.__name__, rw)

    def get_resource_inst(self, path, environ):
        """Return info dictionary for path.

        See DAVProvider.getResourceInst()
        """
        self._count_get_resource_inst += 1

        username = environ.get("http_authenticator.username", "")
        org_id = environ.get("seafile.org_id", "")
        is_guest = environ.get("seafile.is_guest", False)

        if path == "/" or path == "":
            return RootResource(username, environ)

        path = path.rstrip("/")
        try:
            repo, rel_path, obj = resolvePath(path, username, org_id, is_guest)
        except DAVError as e:
            if e.value == HTTP_NOT_FOUND:
                return None
            raise

        if isinstance(obj, SeafDir):
            return SeafDirResource(path, repo, rel_path, obj, environ)
        return SeafileResource(path, repo, rel_path, obj, environ)
def resolvePath(path, username, org_id, is_guest):
|
||||
segments = path.strip("/").split("/")
|
||||
if len(segments) == 0:
|
||||
raise DAVError(HTTP_BAD_REQUEST)
|
||||
repo_name = segments.pop(0)
|
||||
|
||||
repo = getRepoByName(repo_name, username, org_id, is_guest)
|
||||
|
||||
rel_path = ""
|
||||
obj = get_repo_root_seafdir(repo)
|
||||
|
||||
n_segs = len(segments)
|
||||
i = 0
|
||||
parent = None
|
||||
for segment in segments:
|
||||
parent = obj
|
||||
obj = parent.lookup(segment)
|
||||
|
||||
if not obj or (isinstance(obj, SeafFile) and i != n_segs-1):
|
||||
raise DAVError(HTTP_NOT_FOUND)
|
||||
|
||||
rel_path += "/" + segment
|
||||
i += 1
|
||||
|
||||
if parent:
|
||||
obj.mtime = parent.lookup_dent(segment).mtime
|
||||
|
||||
return (repo, rel_path, obj)
|
||||
|
||||
def resolveRepoPath(repo, path):
|
||||
segments = path.strip("/").split("/")
|
||||
|
||||
obj = get_repo_root_seafdir(repo)
|
||||
|
||||
n_segs = len(segments)
|
||||
i = 0
|
||||
for segment in segments:
|
||||
obj = obj.lookup(segment)
|
||||
|
||||
if not obj or (isinstance(obj, SeafFile) and i != n_segs-1):
|
||||
return None
|
||||
|
||||
i += 1
|
||||
|
||||
return obj
|
||||
|
||||
def get_repo_root_seafdir(repo):
|
||||
root_id = commit_mgr.get_commit_root_id(repo.id, repo.version, repo.head_cmmt_id)
|
||||
return fs_mgr.load_seafdir(repo.store_id, repo.version, root_id)
|
||||
|
||||
def getRepoByName(repo_name, username, org_id, is_guest):
|
||||
    repos = getAccessibleRepos(username, org_id, is_guest)

    ret_repo = None
    for repo in repos:
        if repo.name == repo_name:
            ret_repo = repo
            break

    if not ret_repo:
        for repo in repos:
            if repo.name + "-" + repo.id[:6] == repo_name:
                ret_repo = repo
                break
    if not ret_repo:
        raise DAVError(HTTP_NOT_FOUND)

    return ret_repo


def getAccessibleRepos(username, org_id, is_guest):
    all_repos = {}

    def addRepo(repo):
        # Skip duplicates and encrypted libraries (their contents cannot be
        # served without the password).
        if all_repos.get(repo.repo_id):
            return
        if not repo.encrypted:
            all_repos[repo.repo_id] = repo

    owned_repos = []
    try:
        owned_repos = get_owned_repos(username, org_id)
    except SearpcError as e:
        util.warn("Failed to list owned repos: %s" % e.msg)

    for orepo in owned_repos:
        if orepo:
            # store_id is used by seafobj to access fs objects.
            # A repo's store_id equals its repo_id, except for virtual repos.
            orepo.store_id = orepo.repo_id
            addRepo(orepo)

    shared_repos = []
    try:
        shared_repos = get_share_in_repo_list(username, org_id)
    except SearpcError as e:
        util.warn("Failed to list shared repos: %s" % e.msg)

    for srepo in shared_repos:
        if srepo:
            addRepo(srepo)

    repos = []
    try:
        repos = get_group_repos(username, org_id)
    except SearpcError as e:
        util.warn("Failed to get group repos for %s: %s" % (username, e.msg))

    for grepo in repos:
        if grepo:
            addRepo(grepo)

    for prepo in list_inner_pub_repos(username, org_id, is_guest):
        if prepo:
            addRepo(prepo)

    return all_repos.values()


def get_group_repos(username, org_id):
    if org_id:
        return seafile_api.get_org_group_repos_by_user(username, org_id)
    else:
        return seafile_api.get_group_repos_by_user(username)


def get_owned_repos(username, org_id):
    if org_id:
        return seafile_api.get_org_owned_repo_list(org_id, username)
    else:
        return seafile_api.get_owned_repo_list(username)


def get_share_in_repo_list(username, org_id):
    """List the repos shared to the user."""
    if org_id:
        repo_list = seafile_api.get_org_share_in_repo_list(org_id, username,
                                                           -1, -1)
    else:
        repo_list = seafile_api.get_share_in_repo_list(username, -1, -1)

    return repo_list


def list_inner_pub_repos(username, org_id, is_guest):
    if is_guest:
        return []

    if org_id:
        return seafile_api.list_org_inner_pub_repos(org_id)

    return seafile_api.get_inner_pub_repo_list()
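The name lookup above falls back to a `name-<shortid>` form so that two libraries with the same display name remain addressable. A minimal sketch of that disambiguation rule, using a hypothetical `Repo` stand-in rather than the real seafile API object:

```python
from collections import namedtuple

# Hypothetical stand-in for the seafile repo object; only the two
# fields the lookup rule needs.
Repo = namedtuple('Repo', ['id', 'name'])

def find_repo_by_name(repos, repo_name):
    """Mirror the two-pass lookup: exact name first, then name-<shortid>."""
    for repo in repos:
        if repo.name == repo_name:
            return repo
    for repo in repos:
        if repo.name + "-" + repo.id[:6] == repo_name:
            return repo
    return None

repos = [
    Repo(id="a1b2c3d4e5f6", name="Docs"),
    Repo(id="f6e5d4c3b2a1", name="Docs"),
]
# The bare name matches the first "Docs"; the suffixed form selects a
# specific library by the first six characters of its id.
assert find_repo_by_name(repos, "Docs").id == "a1b2c3d4e5f6"
assert find_repo_by_name(repos, "Docs-f6e5d4").id == "f6e5d4c3b2a1"
```

Note the asymmetry: with duplicate names, the plain name silently resolves to whichever repo is listed first, so clients that need a specific library should use the suffixed form.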
@@ -49,6 +49,8 @@ from wsgidav.default_conf import DEFAULT_CONFIG, DEFAULT_VERBOSE
from wsgidav.fs_dav_provider import FilesystemProvider
from wsgidav.wsgidav_app import WsgiDAVApp
from wsgidav.xml_tools import use_lxml
from wsgidav.dc.domain_controller import SeafileDomainController
from wsgidav.seafile_dav_provider import SeafileProvider

try:
    # Try pyjson5 first because it's faster than json5
@@ -179,6 +181,11 @@ See https://github.com/mar10/wsgidav for additional information.
        help="used by 'cheroot' server if SSL certificates are configured "
        "(default: builtin).",
    )
    parser.add_argument(
        "--pid",
        dest="pidfile",
        help="PID file path",
    )

    qv_group = parser.add_mutually_exclusive_group()
    qv_group.add_argument(
@@ -276,6 +283,44 @@ See https://github.com/mar10/wsgidav for additional information.
    return cmdLineOpts, parser


def _loadSeafileSettings(config):
    # Seafile cannot support digest auth, since the plain-text password is needed.
    config['http_authenticator'] = {
        'accept_basic': True,
        'accept_digest': False,
        'default_to_digest': False,
        'domain_controller': SeafileDomainController
    }

    # Load share_name from the seafdav config file, which lives at:
    #
    #   haiwen
    #   - conf
    #     - seafdav.conf
    #
    # A sample seafdav.conf; only "share_name" matters here:
    #
    #   [WEBDAV]
    #   enabled = true
    #   port = 8080
    #   share_name = /seafdav

    share_name = '/'

    seafdav_conf = os.environ.get('SEAFDAV_CONF')
    if seafdav_conf and os.path.exists(seafdav_conf):
        import configparser
        cp = configparser.ConfigParser()
        cp.read(seafdav_conf)
        section_name = 'WEBDAV'

        if cp.has_option(section_name, 'share_name'):
            share_name = cp.get(section_name, 'share_name')

    # Set up the provider mapping for Seafile, e.g. /seafdav -> Seafile provider.
    provider_mapping = {}
    provider_mapping[share_name] = SeafileProvider()
    config['provider_mapping'] = provider_mapping
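The `share_name` handling can be exercised in isolation with the stdlib `configparser` module. A small sketch, using an in-memory copy of the sample config shown above (`read_string` stands in for reading the `SEAFDAV_CONF` file):

```python
import configparser

# In-memory copy of the sample seafdav.conf; only share_name matters here.
SAMPLE = """
[WEBDAV]
enabled = true
port = 8080
share_name = /seafdav
"""

share_name = '/'  # default mount point when no config is present
cp = configparser.ConfigParser()
cp.read_string(SAMPLE)
if cp.has_option('WEBDAV', 'share_name'):
    share_name = cp.get('WEBDAV', 'share_name')

print(share_name)  # /seafdav
```

If the option is missing, the default `'/'` survives, so the Seafile provider is mounted at the server root.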
def _read_config_file(config_file, _verbose):
    """Read configuration file options into a dictionary."""


@@ -361,6 +406,12 @@ def _init_config():
    if not config["provider_mapping"]:
        parser.error("No DAV provider defined.")

    _loadSeafileSettings(config)

    pid_file = cli_opts.get("pidfile")
    if pid_file:
        pid_file = os.path.abspath(pid_file)
        config["pidfile"] = pid_file

    # Quick-configuration of DomainController
    auth = cli_opts.get("auth")
    auth_conf = util.get_dict_value(config, "http_authenticator", as_dict=True)
@@ -426,6 +477,34 @@ def _init_config():

    return cli_opts, config


import gunicorn.app.base


class GunicornApplication(gunicorn.app.base.BaseApplication):

    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super(GunicornApplication, self).__init__()

    def load_config(self):
        # Forward only the options that gunicorn actually recognizes.
        # (gunicorn.six was removed in gunicorn 20.x; use dict.items().)
        config = {key: value for key, value in self.options.items()
                  if key in self.cfg.settings and value is not None}
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


def _run_gunicorn(app, config, _server):
    options = {
        'bind': '%s:%s' % (config.get('host'), config.get('port')),
        'workers': 5,
        'pidfile': config.get('pidfile'),
    }

    GunicornApplication(app, options).run()
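`load_config` forwards only options that gunicorn recognizes and that have a value. The filtering step can be sketched with plain dicts (the `settings` set here is a hypothetical stand-in for gunicorn's `self.cfg.settings` mapping):

```python
# Hypothetical set of recognized setting names, standing in for
# the keys of gunicorn's self.cfg.settings.
settings = {'bind', 'workers', 'pidfile', 'timeout'}

options = {
    'bind': '127.0.0.1:8080',
    'workers': 5,
    'pidfile': None,          # dropped: value is None
    'not_a_setting': True,    # dropped: unknown key
}

# Same filter as GunicornApplication.load_config.
config = {key: value for key, value in options.items()
          if key in settings and value is not None}

print(sorted(config))  # ['bind', 'workers']
```

Dropping `None` values matters here: `_run_gunicorn` always puts `pidfile` in the options dict, and the filter silently discards it when no `--pid` was given.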
def _run_cheroot(app, config, _server):
    """Run WsgiDAV using cheroot.server (https://cheroot.cherrypy.dev/)."""


@@ -749,6 +828,9 @@ def _run_wsgiref(app, config, _server):


SUPPORTED_SERVERS = {
    "gunicorn": _run_gunicorn,
    "paste": _run_paste,
    "gevent": _run_gevent,
    "cheroot": _run_cheroot,
    "ext-wsgiutils": _run_ext_wsgiutils,
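`SUPPORTED_SERVERS` maps a server name to its runner, so dispatch is a plain dict lookup. A minimal sketch with stub runners (hypothetical names; only the `(app, config, server)` call shape is taken from the runners above):

```python
calls = []

# Stub runners sharing the (app, config, server) signature of the _run_* helpers.
def _run_gunicorn(app, config, server):
    calls.append(('gunicorn', config.get('port')))

def _run_cheroot(app, config, server):
    calls.append(('cheroot', config.get('port')))

SUPPORTED_SERVERS = {
    'gunicorn': _run_gunicorn,
    'cheroot': _run_cheroot,
}

def run(app, config):
    """Look up the configured server name and hand off to its runner."""
    server = config.get('server', 'cheroot')
    try:
        handler = SUPPORTED_SERVERS[server]
    except KeyError:
        raise ValueError('Unsupported server: %r' % server)
    handler(app, config, server)

run(None, {'server': 'gunicorn', 'port': 8080})
print(calls)  # [('gunicorn', 8080)]
```

Because the table is a module-level dict, registering the new gunicorn runner is just one more entry; no dispatch logic changes.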