Initial commit of onedrive-v2.5.0-alpha-5

abraunegg 2024-01-09 09:13:17 +11:00
parent 1a88d33be3
commit 48a803aa46
36 changed files with 15201 additions and 13383 deletions


@@ -2,6 +2,13 @@
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## 2.5.0 - TBA
### Changed
* Renamed various documentation files to align with document content
## 2.4.25 - 2023-06-21
### Fixed
* Fixed that the application was reporting as v2.2.24 when in fact it was v2.4.24 (release tagging issue)


@@ -55,7 +55,7 @@ endif
system_unit_files = contrib/systemd/onedrive@.service
user_unit_files = contrib/systemd/onedrive.service
DOCFILES = README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/SharePoint-Shared-Libraries.md docs/USAGE.md docs/BusinessSharedFolders.md docs/advanced-usage.md docs/application-security.md
DOCFILES = readme.md config LICENSE changelog.md docs/advanced-usage.md docs/application-config-options.md docs/application-security.md docs/business-shared-folders.md docs/docker.md docs/install.md docs/national-cloud-deployments.md docs/podman.md docs/privacy-policy.md docs/sharepoint-libraries.md docs/terms-of-service.md docs/ubuntu-package-install.md docs/usage.md
ifneq ("$(wildcard /etc/redhat-release)","")
RHEL = $(shell cat /etc/redhat-release | grep -E "(Red Hat Enterprise Linux|CentOS)" | wc -l)
@@ -66,19 +66,18 @@ RHEL_VERSION = 0
endif
SOURCES = \
src/config.d \
src/itemdb.d \
src/log.d \
src/main.d \
src/monitor.d \
src/onedrive.d \
src/qxor.d \
src/selective.d \
src/sqlite.d \
src/sync.d \
src/upload.d \
src/config.d \
src/log.d \
src/util.d \
src/progress.d \
src/qxor.d \
src/curlEngine.d \
src/onedrive.d \
src/sync.d \
src/itemdb.d \
src/sqlite.d \
src/clientSideFiltering.d \
src/monitor.d \
src/arsd/cgi.d
ifeq ($(NOTIFICATIONS),yes)
@@ -92,10 +91,9 @@ clean:
rm -rf autom4te.cache
rm -f config.log config.status
# also remove files generated via ./configure
# Remove files generated via ./configure
distclean: clean
rm -f Makefile contrib/pacman/PKGBUILD contrib/spec/onedrive.spec onedrive.1 \
$(system_unit_files) $(user_unit_files)
rm -f Makefile contrib/pacman/PKGBUILD contrib/spec/onedrive.spec onedrive.1 $(system_unit_files) $(user_unit_files)
onedrive: $(SOURCES)
if [ -f .git/HEAD ] ; then \


@@ -5,14 +5,17 @@
[![Build Docker Images](https://github.com/abraunegg/onedrive/actions/workflows/docker.yaml/badge.svg)](https://github.com/abraunegg/onedrive/actions/workflows/docker.yaml)
[![Docker Pulls](https://img.shields.io/docker/pulls/driveone/onedrive)](https://hub.docker.com/r/driveone/onedrive)
A free Microsoft OneDrive Client which supports OneDrive Personal, OneDrive for Business, OneDrive for Office365 and SharePoint.
Introducing a free Microsoft OneDrive Client that seamlessly supports OneDrive Personal, OneDrive for Business, OneDrive for Office365, and SharePoint Libraries.
This powerful and highly configurable client can run on all major Linux distributions, FreeBSD, or as a Docker container. It supports one-way and two-way sync capabilities and securely connects to Microsoft OneDrive services.
This robust and highly customisable client is compatible with all major Linux distributions and FreeBSD, and can also be deployed as a container using Docker or Podman. It offers both one-way and two-way synchronisation capabilities while ensuring a secure connection to Microsoft OneDrive services.
This client is a 'fork' of the [skilion](https://github.com/skilion/onedrive) client, which the developer has confirmed he has no desire to maintain or support the client ([reference](https://github.com/skilion/onedrive/issues/518#issuecomment-717604726)). This fork has been in active development since mid 2018.
This client was originally derived as a 'fork' of the [skilion](https://github.com/skilion/onedrive) client; notably, the developer of the original client has explicitly stated they have no intention of maintaining or supporting their work ([reference](https://github.com/skilion/onedrive/issues/518#issuecomment-717604726)).
This client represents a 100% re-imagining of the original work, addressing numerous notable bugs and issues while incorporating a significant array of new features. This client has been under active development since mid-2018.
## Features
* State caching
* Supports 'Client Side Filtering' rules to determine what should be synced with Microsoft OneDrive
* Sync State Caching
* Real-Time local file monitoring with inotify
* Real-Time syncing of remote updates via webhooks
* File upload / download validation to ensure data integrity
@@ -26,6 +29,7 @@ This client is a 'fork' of the [skilion](https://github.com/skilion/onedrive) cl
* Support for National cloud deployments (Microsoft Cloud for US Government, Microsoft Cloud Germany, Azure and Office 365 operated by 21Vianet in China)
* Supports single & multi-tenanted applications
* Supports rate limiting of traffic
* Supports multi-threaded uploads and downloads
## What's missing
* Ability to encrypt/decrypt files on-the-fly when uploading/downloading files from OneDrive
@@ -36,28 +40,17 @@ This client is a 'fork' of the [skilion](https://github.com/skilion/onedrive) cl
* Colorful log output terminal modification: [OneDrive Client for Linux Colorful log Output](https://github.com/zzzdeb/dotfiles/blob/master/scripts/tools/onedrive_log)
* System Tray Icon: [OneDrive Client for Linux System Tray Icon](https://github.com/DanielBorgesOliveira/onedrive_tray)
## Supported Application Version
Only the current application release version or greater is supported.
The current application release version is: [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases)
Check the version of the application you are using with `onedrive --version` and ensure that you are running either the current release, or compile the application yourself from master to get the latest version.
If you are not using the above application version or greater, you must upgrade your application to obtain support.
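For example, check your installed version and compare it against the current release shown above (the output below is illustrative for this alpha):
```text
onedrive --version
onedrive v2.5.0-alpha-5
```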
## Have a Question
If you have a question or need something clarified, please raise a new discussion post [here](https://github.com/abraunegg/onedrive/discussions)
Be sure to review the Frequently Asked Questions as well before raising a new discussion post.
## Frequently Asked Questions
Refer to [Frequently Asked Questions](https://github.com/abraunegg/onedrive/wiki/Frequently-Asked-Questions)
## Reporting an Issue or Bug
If you encounter any bugs you can report them here on GitHub. Before filing an issue be sure to:
## Have a question
If you have a question or need something clarified, please raise a new discussion post [here](https://github.com/abraunegg/onedrive/discussions)
1. Check the version of the application you are using with `onedrive --version` and ensure that you are running a supported application version. If you are not using a supported application version, you must first upgrade your application to a supported version and then re-test for your issue.
2. If you are using a supported application version, fill in a new bug report using the [issue template](https://github.com/abraunegg/onedrive/issues/new?template=bug_report.md)
## Reporting an Issue or Bug
If you encounter any bugs you can report them here on GitHub. Before filing an issue be sure to:
1. Check the version of the application you are using with `onedrive --version` and ensure that you are running either the latest [release](https://github.com/abraunegg/onedrive/releases) or built from master.
2. Fill in a new bug report using the [issue template](https://github.com/abraunegg/onedrive/issues/new?template=bug_report.md)
3. Generate a debug log for support using the following [process](https://github.com/abraunegg/onedrive/wiki/Generate-debug-log-for-support)
* If you are in *any* way concerned regarding the sensitivity of the data contained within the verbose debug log file, create a new OneDrive account, configure the client to use that, use *dummy* data to simulate your environment and then replicate your original issue
* If you are still concerned, provide an NDA or confidentiality document to sign
@@ -70,23 +63,23 @@ Refer to [docs/known-issues.md](https://github.com/abraunegg/onedrive/blob/maste
## Documentation and Configuration Assistance
### Installing from Distribution Packages or Building the OneDrive Client for Linux from source
Refer to [docs/INSTALL.md](https://github.com/abraunegg/onedrive/blob/master/docs/INSTALL.md)
Refer to [docs/install.md](https://github.com/abraunegg/onedrive/blob/master/docs/install.md)
### Configuration and Usage
Refer to [docs/USAGE.md](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md)
Refer to [docs/usage.md](https://github.com/abraunegg/onedrive/blob/master/docs/usage.md)
### Configure OneDrive Business Shared Folders
Refer to [docs/BusinessSharedFolders.md](https://github.com/abraunegg/onedrive/blob/master/docs/BusinessSharedFolders.md)
Refer to [docs/business-shared-folders.md](https://github.com/abraunegg/onedrive/blob/master/docs/business-shared-folders.md)
### Configure SharePoint / Office 365 Shared Libraries (Business or Education)
Refer to [docs/SharePoint-Shared-Libraries.md](https://github.com/abraunegg/onedrive/blob/master/docs/SharePoint-Shared-Libraries.md)
Refer to [docs/sharepoint-libraries.md](https://github.com/abraunegg/onedrive/blob/master/docs/sharepoint-libraries.md)
### Configure National Cloud support
Refer to [docs/national-cloud-deployments.md](https://github.com/abraunegg/onedrive/blob/master/docs/national-cloud-deployments.md)
### Docker support
Refer to [docs/Docker.md](https://github.com/abraunegg/onedrive/blob/master/docs/Docker.md)
Refer to [docs/docker.md](https://github.com/abraunegg/onedrive/blob/master/docs/docker.md)
### Podman support
Refer to [docs/Podman.md](https://github.com/abraunegg/onedrive/blob/master/docs/Podman.md)
Refer to [docs/podman.md](https://github.com/abraunegg/onedrive/blob/master/docs/podman.md)

config

@@ -3,7 +3,7 @@
# with their default values.
# All values need to be enclosed in quotes
# When changing a config option below, remove the '#' from the start of the line
# For explanations of all config options below see docs/USAGE.md or the man page.
# For explanations of all config options below see docs/usage.md or the man page.
#
# sync_dir = "~/OneDrive"
# skip_file = "~*|.~*|*.tmp"
@@ -40,22 +40,19 @@
# bypass_data_preservation = "false"
# azure_ad_endpoint = ""
# azure_tenant_id = "common"
# sync_business_shared_folders = "false"
# sync_business_shared_items = "false"
# sync_dir_permissions = "700"
# sync_file_permissions = "600"
# rate_limit = "131072"
# operation_timeout = "3600"
# webhook_enabled = "false"
# webhook_public_url = ""
# webhook_listening_host = ""
# webhook_listening_port = "8888"
# webhook_expiration_interval = "86400"
# webhook_renewal_interval = "43200"
# webhook_expiration_interval = "600"
# webhook_renewal_interval = "300"
# webhook_retry_interval = "60"
# space_reservation = "50"
# display_running_config = "false"
# read_only_auth_scope = "false"
# cleanup_local_files = "false"
# operation_timeout = "3600"
# dns_timeout = "60"
# connect_timeout = "10"
# data_timeout = "600"
# ip_protocol_version = "0"

configure

@@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for onedrive v2.4.25.
# Generated by GNU Autoconf 2.69 for onedrive v2.5.0-alpha-5.
#
# Report bugs to <https://github.com/abraunegg/onedrive>.
#
@@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='onedrive'
PACKAGE_TARNAME='onedrive'
PACKAGE_VERSION='v2.4.25'
PACKAGE_STRING='onedrive v2.4.25'
PACKAGE_VERSION='v2.5.0-alpha-5'
PACKAGE_STRING='onedrive v2.5.0-alpha-5'
PACKAGE_BUGREPORT='https://github.com/abraunegg/onedrive'
PACKAGE_URL=''
@@ -1219,7 +1219,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures onedrive v2.4.25 to adapt to many kinds of systems.
\`configure' configures onedrive v2.5.0-alpha-5 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1280,7 +1280,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of onedrive v2.4.25:";;
short | recursive ) echo "Configuration of onedrive v2.5.0-alpha-5:";;
esac
cat <<\_ACEOF
@@ -1393,7 +1393,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
onedrive configure v2.4.25
onedrive configure v2.5.0-alpha-5
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@@ -1410,7 +1410,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by onedrive $as_me v2.4.25, which was
It was created by onedrive $as_me v2.5.0-alpha-5, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@@ -2162,7 +2162,7 @@ fi
PACKAGE_DATE="June 2023"
PACKAGE_DATE="January 2024"
@@ -3159,7 +3159,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by onedrive $as_me v2.4.25, which was
This file was extended by onedrive $as_me v2.5.0-alpha-5, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@@ -3212,7 +3212,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
onedrive config.status v2.4.25
onedrive config.status v2.5.0-alpha-5
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"


@@ -9,7 +9,7 @@ dnl - commit the changed files (configure.ac, configure)
dnl - tag the release
AC_PREREQ([2.69])
AC_INIT([onedrive],[v2.4.25], [https://github.com/abraunegg/onedrive], [onedrive])
AC_INIT([onedrive],[v2.5.0-alpha-5], [https://github.com/abraunegg/onedrive], [onedrive])
AC_CONFIG_SRCDIR([src/main.d])


@@ -11,7 +11,7 @@ _onedrive()
prev=${COMP_WORDS[COMP_CWORD-1]}
options='--check-for-nomount --check-for-nosync --debug-https --disable-notifications --display-config --display-sync-status --download-only --disable-upload-validation --dry-run --enable-logging --force-http-1.1 --force-http-2 --get-file-link --local-first --logout -m --monitor --no-remote-delete --print-token --reauth --resync --skip-dot-files --skip-symlinks --synchronize --upload-only -v --verbose --version -h --help'
argopts='--create-directory --get-O365-drive-id --operation-timeout --remove-directory --single-directory --source-directory'
argopts='--create-directory --get-O365-drive-id --remove-directory --single-directory --source-directory'
# Loop on the arguments to manage conflicting options
for (( i=0; i < ${#COMP_WORDS[@]}-1; i++ )); do
@@ -34,7 +34,7 @@ _onedrive()
fi
return 0
;;
--create-directory|--get-O365-drive-id|--operation-timeout|--remove-directory|--single-directory|--source-directory)
--create-directory|--get-O365-drive-id|--remove-directory|--single-directory|--source-directory)
return 0
;;
*)


@@ -23,7 +23,6 @@ complete -c onedrive -l local-first -d 'Synchronize from the local directory sou
complete -c onedrive -l logout -d 'Logout the current user.'
complete -c onedrive -n "not __fish_seen_subcommand_from --synchronize" -a "-m --monitor" -d 'Keep monitoring for local and remote changes.'
complete -c onedrive -l no-remote-delete -d 'Do not delete local file deletes from OneDrive when using --upload-only.'
complete -c onedrive -l operation-timeout -d 'Specify the maximum amount of time (in seconds) an operation is allowed to take.'
complete -c onedrive -l print-token -d 'Print the access token, useful for debugging.'
complete -c onedrive -l remove-directory -d 'Remove a directory on OneDrive - no sync will be performed.'
complete -c onedrive -l reauth -d 'Reauthenticate the client with OneDrive.'


@@ -27,7 +27,6 @@ all_opts=(
'--logout[Logout the current user]'
'(-m --monitor)'{-m,--monitor}'[Keep monitoring for local and remote changes]'
'--no-remote-delete[Do not delete local file deletes from OneDrive when using --upload-only]'
'--operation-timeout[Specify the maximum amount of time (in seconds) an operation is allowed to take.]:seconds:'
'--print-token[Print the access token, useful for debugging]'
'--reauth[Reauthenticate the client with OneDrive]'
'--resync[Forget the last saved state, perform a full sync]'


@@ -118,6 +118,13 @@ if [ -n "${ONEDRIVE_SINGLE_DIRECTORY:=""}" ]; then
ARGS=(--single-directory \"${ONEDRIVE_SINGLE_DIRECTORY}\" ${ARGS[@]})
fi
# Tell the client to run in dry-run mode
if [ "${ONEDRIVE_DRYRUN:=0}" == "1" ]; then
echo "# We are running in dry-run mode"
echo "# Adding --dry-run"
ARGS=(--dry-run ${ARGS[@]})
fi
if [ ${#} -gt 0 ]; then
ARGS=("${@}")
fi


@@ -1,192 +0,0 @@
# How to configure OneDrive Business Shared Folder Sync
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Process Overview
Syncing OneDrive Business Shared Folders requires additional configuration for your 'onedrive' client:
1. List available shared folders to determine which folder you wish to sync & to validate that you have access to that folder
2. Create a new file called 'business_shared_folders' in your config directory which contains a list of the shared folders you wish to sync
3. Test the configuration using '--dry-run'
4. Sync the OneDrive Business Shared folders as required
## Listing available OneDrive Business Shared Folders
List the available OneDrive Business Shared folders with the following command:
```text
onedrive --list-shared-folders
```
This will return a listing of all OneDrive Business Shared folders which have been shared with you and by whom. This is important for conflict resolution:
```text
Initializing the Synchronization Engine ...
Listing available OneDrive Business Shared Folders:
---------------------------------------
Shared Folder: SharedFolder0
Shared By: Firstname Lastname
---------------------------------------
Shared Folder: SharedFolder1
Shared By: Firstname Lastname
---------------------------------------
Shared Folder: SharedFolder2
Shared By: Firstname Lastname
---------------------------------------
Shared Folder: SharedFolder0
Shared By: Firstname Lastname (user@domain)
---------------------------------------
Shared Folder: SharedFolder1
Shared By: Firstname Lastname (user@domain)
---------------------------------------
Shared Folder: SharedFolder2
Shared By: Firstname Lastname (user@domain)
...
```
## Configuring OneDrive Business Shared Folders
1. Create a new file called 'business_shared_folders' in your config directory
2. On each new line, list the OneDrive Business Shared Folder you wish to sync
```text
[alex@centos7full onedrive]$ cat ~/.config/onedrive/business_shared_folders
# comment
Child Shared Folder
# Another comment
Top Level to Share
[alex@centos7full onedrive]$
```
3. Validate your configuration with `onedrive --display-config`:
```text
Configuration file successfully loaded
onedrive version = v2.4.3
Config path = /home/alex/.config/onedrive-business/
Config file found in config path = true
Config option 'check_nosync' = false
Config option 'sync_dir' = /home/alex/OneDriveBusiness
Config option 'skip_dir' =
Config option 'skip_file' = ~*|.~*|*.tmp
Config option 'skip_dotfiles' = false
Config option 'skip_symlinks' = false
Config option 'monitor_interval' = 300
Config option 'min_notify_changes' = 5
Config option 'log_dir' = /var/log/onedrive/
Config option 'classify_as_big_delete' = 1000
Config option 'sync_root_files' = false
Selective sync 'sync_list' configured = false
Business Shared Folders configured = true
business_shared_folders contents:
# comment
Child Shared Folder
# Another comment
Top Level to Share
```
## Performing a sync of OneDrive Business Shared Folders
Perform a standalone sync using the following command: `onedrive --synchronize --sync-shared-folders --verbose`:
```text
onedrive --synchronize --sync-shared-folders --verbose
Using 'user' Config Dir: /home/alex/.config/onedrive-business/
Using 'system' Config Dir:
Configuration file successfully loaded
Initializing the OneDrive API ...
Configuring Global Azure AD Endpoints
Opening the item database ...
All operations will be performed in: /home/alex/OneDriveBusiness
Application version: v2.4.3
Account Type: business
Default Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Default Root ID: 01WIXGO5V6Y2GOVW7725BZO354PWSELRRZ
Remaining Free Space: 1098316220277
Fetching details for OneDrive Root
OneDrive Root exists in the database
Initializing the Synchronization Engine ...
Syncing changes from OneDrive ...
Applying changes of Path ID: 01WIXGO5V6Y2GOVW7725BZO354PWSELRRZ
Number of items from OneDrive to process: 0
Attempting to sync OneDrive Business Shared Folders
Syncing this OneDrive Business Shared Folder: Child Shared Folder
OneDrive Business Shared Folder - Shared By: test user
Applying changes of Path ID: 01JRXHEZMREEB3EJVHNVHKNN454Q7DFXPR
Adding OneDrive root details for processing
Adding OneDrive folder details for processing
Adding 4 OneDrive items for processing from OneDrive folder
Adding 2 OneDrive items for processing from /Child Shared Folder/Cisco VDI Whitepaper
Adding 2 OneDrive items for processing from /Child Shared Folder/SMPP_Shared
Processing 11 OneDrive items to ensure consistent local state
Syncing this OneDrive Business Shared Folder: Top Level to Share
OneDrive Business Shared Folder - Shared By: test user (testuser@mynasau3.onmicrosoft.com)
Applying changes of Path ID: 01JRXHEZLRMXHKBYZNOBF3TQOPBXS3VZMA
Adding OneDrive root details for processing
Adding OneDrive folder details for processing
Adding 4 OneDrive items for processing from OneDrive folder
Adding 3 OneDrive items for processing from /Top Level to Share/10-Files
Adding 2 OneDrive items for processing from /Top Level to Share/10-Files/Cisco VDI Whitepaper
Adding 2 OneDrive items for processing from /Top Level to Share/10-Files/Images
Adding 8 OneDrive items for processing from /Top Level to Share/10-Files/Images/JPG
Adding 8 OneDrive items for processing from /Top Level to Share/10-Files/Images/PNG
Adding 2 OneDrive items for processing from /Top Level to Share/10-Files/SMPP
Processing 31 OneDrive items to ensure consistent local state
Uploading differences of ~/OneDriveBusiness
Processing root
The directory has not changed
Processing SMPP_Local
The directory has not changed
Processing SMPP-IF-SPEC_v3_3-24858.pdf
The file has not changed
Processing SMPP_v3_4_Issue1_2-24857.pdf
The file has not changed
Processing new_local_file.txt
The file has not changed
Processing root
The directory has not changed
...
The directory has not changed
Processing week02-03-Combinational_Logic-v1.pptx
The file has not changed
Uploading new items of ~/OneDriveBusiness
Applying changes of Path ID: 01WIXGO5V6Y2GOVW7725BZO354PWSELRRZ
Number of items from OneDrive to process: 0
Attempting to sync OneDrive Business Shared Folders
Syncing this OneDrive Business Shared Folder: Child Shared Folder
OneDrive Business Shared Folder - Shared By: test user
Applying changes of Path ID: 01JRXHEZMREEB3EJVHNVHKNN454Q7DFXPR
Adding OneDrive root details for processing
Adding OneDrive folder details for processing
Adding 4 OneDrive items for processing from OneDrive folder
Adding 2 OneDrive items for processing from /Child Shared Folder/Cisco VDI Whitepaper
Adding 2 OneDrive items for processing from /Child Shared Folder/SMPP_Shared
Processing 11 OneDrive items to ensure consistent local state
Syncing this OneDrive Business Shared Folder: Top Level to Share
OneDrive Business Shared Folder - Shared By: test user (testuser@mynasau3.onmicrosoft.com)
Applying changes of Path ID: 01JRXHEZLRMXHKBYZNOBF3TQOPBXS3VZMA
Adding OneDrive root details for processing
Adding OneDrive folder details for processing
Adding 4 OneDrive items for processing from OneDrive folder
Adding 3 OneDrive items for processing from /Top Level to Share/10-Files
Adding 2 OneDrive items for processing from /Top Level to Share/10-Files/Cisco VDI Whitepaper
Adding 2 OneDrive items for processing from /Top Level to Share/10-Files/Images
Adding 8 OneDrive items for processing from /Top Level to Share/10-Files/Images/JPG
Adding 8 OneDrive items for processing from /Top Level to Share/10-Files/Images/PNG
Adding 2 OneDrive items for processing from /Top Level to Share/10-Files/SMPP
Processing 31 OneDrive items to ensure consistent local state
```
**Note:** Whenever you modify the `business_shared_folders` file you must perform a `--resync` of your database to clean up stale entries due to changes in your configuration.
## Enable / Disable syncing of OneDrive Business Shared Folders
Performing a sync of the configured OneDrive Business Shared Folders can be enabled / disabled via adding the following to your configuration file.
### Enable syncing of OneDrive Business Shared Folders via config file
```text
sync_business_shared_folders = "true"
```
### Disable syncing of OneDrive Business Shared Folders via config file
```text
sync_business_shared_folders = "false"
```
## Known Issues
Shared folders shared with you by people outside of your 'organisation' are unable to be synced. This is due to the Microsoft Graph API not presenting these folders.
Shared folders that match this scenario, when you view 'Shared' via OneDrive online, will have a 'world' symbol as per below:
![shared_with_me](./images/shared_with_me.JPG)
This issue is being tracked by: [#966](https://github.com/abraunegg/onedrive/issues/966)


@@ -228,7 +228,7 @@ docker volume inspect onedrive_conf
Or you can map your own config folder to the config volume. Make sure to copy all files from the docker volume into your mapped folder first.
The detailed document for the config can be found here: [Configuration](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#configuration)
The detailed document for the config can be found here: [Configuration](https://github.com/abraunegg/onedrive/blob/master/docs/usage.md#configuration)
### Syncing multiple accounts
There are many ways to do this, the easiest is probably to do the following:
@@ -271,9 +271,10 @@ docker run $firstRun --restart unless-stopped --name onedrive -v onedrive_conf:/
| <B>ONEDRIVE_LOGOUT</B> | Controls "--logout" switch. Default is 0 | 1 |
| <B>ONEDRIVE_REAUTH</B> | Controls "--reauth" switch. Default is 0 | 1 |
| <B>ONEDRIVE_AUTHFILES</B> | Controls "--auth-files" option. Default is "" | "authUrl:responseUrl" |
| <B>ONEDRIVE_AUTHRESPONSE</B> | Controls "--auth-response" option. Default is "" | See [here](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#authorize-the-application-with-your-onedrive-account) |
| <B>ONEDRIVE_AUTHRESPONSE</B> | Controls "--auth-response" option. Default is "" | See [here](https://github.com/abraunegg/onedrive/blob/master/docs/usage.md#authorize-the-application-with-your-onedrive-account) |
| <B>ONEDRIVE_DISPLAY_CONFIG</B> | Controls "--display-running-config" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_SINGLE_DIRECTORY</B> | Controls "--single-directory" option. Default = "" | "mydir" |
| <B>ONEDRIVE_DRYRUN</B> | Controls "--dry-run" option. Default is 0 | 1 |
### Environment Variables Usage Examples
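**Dry Run:**
A sketch of using the new `ONEDRIVE_DRYRUN` variable; the volume name, data path, and image tag shown here are illustrative and follow the examples earlier in this document:
```
docker run -it --rm -e ONEDRIVE_DRYRUN=1 -v onedrive_conf:/onedrive/conf -v "${HOME}/OneDrive:/onedrive/data" driveone/onedrive:edge
```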
**Verbose Output:**


@@ -11,9 +11,10 @@ Distribution packages may be of an older release when compared to the latest rel
| Distribution | Package Name & Package Link | &nbsp;&nbsp;PKG_Version&nbsp;&nbsp; | &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 | Extra Details |
|---------------------------------|------------------------------------------------------------------------------|:---------------:|:----:|:------:|:-----:|:-------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Alpine Linux | [onedrive](https://pkgs.alpinelinux.org/packages?name=onedrive&branch=edge) |<a href="https://pkgs.alpinelinux.org/packages?name=onedrive&branch=edge"><img src="https://repology.org/badge/version-for-repo/alpine_edge/onedrive.svg?header=" alt="Alpine Linux Edge package" width="46" height="20"></a>|❌|✔|❌|✔ | |
| Arch Linux<br><br>Manjaro Linux | [onedrive-abraunegg](https://aur.archlinux.org/packages/onedrive-abraunegg/) |<a href="https://aur.archlinux.org/packages/onedrive-abraunegg"><img src="https://repology.org/badge/version-for-repo/aur/onedrive-abraunegg.svg?header=" alt="AUR package" width="46" height="20"></a>|✔|✔|✔|✔ | Install via: `pamac build onedrive-abraunegg` from the Arch Linux User Repository (AUR)<br><br>**Note:** If asked regarding a provider for 'd-runtime' and 'd-compiler', select 'liblphobos' and 'ldc'<br><br>**Note:** System must have at least 1GB of memory & 1GB swap space
| Arch Linux<br><br>Manjaro Linux | [onedrive-abraunegg](https://aur.archlinux.org/packages/onedrive-abraunegg/) |<a href="https://aur.archlinux.org/packages/onedrive-abraunegg"><img src="https://repology.org/badge/version-for-repo/aur/onedrive-abraunegg.svg?header=" alt="AUR package" width="46" height="20"></a>|✔|✔|✔|✔ | Install via: `pamac build onedrive-abraunegg` from the Arch Linux User Repository (AUR)<br><br>**Note:** You must first install 'base-devel' as this is a pre-requisite for using the AUR<br><br>**Note:** If asked regarding a provider for 'd-runtime' and 'd-compiler', select 'liblphobos' and 'ldc'<br><br>**Note:** System must have at least 1GB of memory & 1GB swap space
| Debian 11 | [onedrive](https://packages.debian.org/bullseye/source/onedrive) |<a href="https://packages.debian.org/bullseye/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_11/onedrive.svg?header=" alt="Debian 11 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories<br><br>It is recommended that for Debian 11 you install from the OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Debian 12 | [onedrive](https://packages.debian.org/bookworm/source/onedrive) |<a href="https://packages.debian.org/bookworm/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_12/onedrive.svg?header=" alt="Debian 12 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories<br><br>It is recommended that for Debian 12 you install from the OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Debian Sid | [onedrive](https://packages.debian.org/sid/onedrive) |<a href="https://packages.debian.org/sid/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_unstable/onedrive.svg?header=" alt="Debian Sid package" width="46" height="20"></a>|✔|✔|✔|✔| |
| Fedora | [onedrive](https://koji.fedoraproject.org/koji/packageinfo?packageID=26044) |<a href="https://koji.fedoraproject.org/koji/packageinfo?packageID=26044"><img src="https://repology.org/badge/version-for-repo/fedora_rawhide/onedrive.svg?header=" alt="Fedora Rawhide package" width="46" height="20"></a>|✔|✔|✔|✔| |
| Gentoo | [onedrive](https://gpo.zugaina.org/net-misc/onedrive) | No API Available |✔|✔|❌|❌| |
| Homebrew | [onedrive](https://formulae.brew.sh/formula/onedrive) | <a href="https://formulae.brew.sh/formula/onedrive"><img src="https://repology.org/badge/version-for-repo/homebrew/onedrive.svg?header=" alt="Homebrew package" width="46" height="20"></a> |❌|✔|❌|❌| |
@@ -211,8 +212,10 @@ sudo make install
```
### Build options
Notifications can be enabled using the `configure` switch `--enable-notifications`.
#### GUI Notification Support
GUI notification support can be enabled using the `configure` switch `--enable-notifications`.
#### systemd service directory customisation support
Systemd service files are installed in the appropriate directories on the system,
as provided by `pkg-config systemd` settings. If overriding the
deduced paths is necessary, the two options `--with-systemdsystemunitdir` (for
@@ -220,9 +223,11 @@ the Systemd system unit location), and `--with-systemduserunitdir` (for the
Systemd user unit location) can be specified. Passing `no` to one of these
options disables service file installation.
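For example, a `configure` invocation that enables notifications and overrides the systemd system unit directory might look as follows (a sketch; the directory shown is illustrative):
```text
./configure --enable-notifications --with-systemdsystemunitdir=/usr/lib/systemd/system
```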
#### Additional Compiler Debug
By passing `--enable-debug` to the `configure` call, `onedrive` is built with additional debug
information, useful (for example) for obtaining `perf` profiling figures.
#### Shell Completion Support
By passing `--enable-completions` to the `configure` call, shell completion functions are
installed for `bash`, `zsh` and `fish`. The installation directories are determined
as far as possible automatically, but can be overridden by passing


@@ -255,7 +255,7 @@ podman volume inspect onedrive_conf
```
Or you can map your own config folder to the config volume. Make sure to copy all files from the volume into your mapped folder first.
The detailed document for the config can be found here: [Configuration](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#configuration)
The detailed document for the config can be found here: [Configuration](https://github.com/abraunegg/onedrive/blob/master/docs/usage.md#configuration)
### Syncing multiple accounts
There are many ways to do this, the easiest is probably to do the following:
@@ -291,9 +291,10 @@ podman run -it --name onedrive_work --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" \
| <B>ONEDRIVE_LOGOUT</B> | Controls "--logout" switch. Default is 0 | 1 |
| <B>ONEDRIVE_REAUTH</B> | Controls "--reauth" switch. Default is 0 | 1 |
| <B>ONEDRIVE_AUTHFILES</B> | Controls "--auth-files" option. Default is "" | "authUrl:responseUrl" |
| <B>ONEDRIVE_AUTHRESPONSE</B> | Controls "--auth-response" option. Default is "" | See [here](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#authorize-the-application-with-your-onedrive-account) |
| <B>ONEDRIVE_AUTHRESPONSE</B> | Controls "--auth-response" option. Default is "" | See [here](https://github.com/abraunegg/onedrive/blob/master/docs/usage.md#authorize-the-application-with-your-onedrive-account) |
| <B>ONEDRIVE_DISPLAY_CONFIG</B> | Controls "--display-running-config" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_SINGLE_DIRECTORY</B> | Controls "--single-directory" option. Default = "" | "mydir" |
| <B>ONEDRIVE_DRYRUN</B> | Controls "--dry-run" option. Default is 0 | 1 |
### Environment Variables Usage Examples
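**Dry Run:**
A sketch of using the new `ONEDRIVE_DRYRUN` variable; the volume name, UID/GID variables, data path, and image tag follow the examples earlier in this document and are illustrative:
```
podman run -it --rm -e ONEDRIVE_DRYRUN=1 --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" -v onedrive_conf:/onedrive/conf -v "${HOME}/OneDrive:/onedrive/data" driveone/onedrive:edge
```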
**Verbose Output:**

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,40 @@
# How to configure OneDrive Business Shared Folder Sync
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Important Note
This feature has been 100% re-written from v2.5.0 onwards. A prerequisite before using this capability in v2.5.0 and above is for you to revert any Shared Business Folder configuration you may currently be using, including, but not limited to:
* Removing `sync_business_shared_folders = "true|false"` from your 'config' file
* Removing the 'business_shared_folders' file
* Removing any local data and shared folder data from your configured 'sync_dir' to ensure that there are no conflicts or issues.
## Process Overview
Syncing OneDrive Business Shared Folders requires additional configuration for your 'onedrive' client:
1. From the OneDrive web interface, review the 'Shared' objects that have been shared with you.
2. Select the applicable folder, and click the 'Add shortcut to My files', which will then add this to your 'My files' folder
3. Update your OneDrive Client for Linux 'config' file to enable the feature by adding `sync_business_shared_items = "true"`. Adding this option will trigger a `--resync` requirement.
4. Test the configuration using '--dry-run'
5. Remove the use of '--dry-run' and sync the OneDrive Business Shared folders as required (see the sketch below)
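A sketch of steps 3 to 5, assuming the v2.5.0 command line option names:
```text
# A config change to 'sync_business_shared_items' requires a resync; test with --dry-run first
onedrive --sync --resync --dry-run
# Then perform the real sync
onedrive --sync --resync
```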
**NOTE:** This documentation will be updated as this feature progresses.
### Enable syncing of OneDrive Business Shared Folders via config file
```text
sync_business_shared_items = "true"
```
### Disable syncing of OneDrive Business Shared Folders via config file
```text
sync_business_shared_items = "false"
```
## Known Issues
Shared folders shared with you by people outside of your 'organisation' are unable to be synced. This is due to the Microsoft Graph API not presenting these folders.
Shared folders that match this scenario, when you view 'Shared' via OneDrive online, will have a 'world' symbol as per below:
![shared_with_me](./images/shared_with_me.JPG)
This issue is being tracked by: [#966](https://github.com/abraunegg/onedrive/issues/966)


@@ -1,54 +1,60 @@
# Known Issues
The below are known issues with this client:
# List of Identified Known Issues
The following points detail known issues associated with this client:
## Moving files into different folders should not cause data to delete and be re-uploaded
**Issue Tracker:** [#876](https://github.com/abraunegg/onedrive/issues/876)
## Renaming or Moving Files in Standalone Mode causes online deletion and re-upload to occur
**Issue Tracker:** [#876](https://github.com/abraunegg/onedrive/issues/876), [#2579](https://github.com/abraunegg/onedrive/issues/2579)
**Description:**
**Summary:**
When running the client in standalone mode (`--synchronize`) moving folders that are successfully synced around between subsequent standalone syncs causes a deletion & re-upload of data to occur.
Renaming or moving files and/or folders while using the standalone sync option `--sync` results in unnecessary data deletion online and subsequent re-upload.
**Explanation:**
**Detailed Description:**
Technically, the client is 'working' correctly, as, when moving files, you are 'deleting' them from the current location, but copying them to the 'new location'. As the client is running in standalone sync mode, there is no way to track what OS operations have been done when the client is not running - thus, this is why the 'delete and upload' is occurring.
In standalone mode (`--sync`), the renaming or moving folders locally that have already been synchronized leads to the data being deleted online and then re-uploaded in the next synchronization process.
**Workaround:**
**Technical Explanation:**
If the tracking of moving data to new local directories is required, it is better to run the client in service mode (`--monitor`) rather than in standalone mode, as the 'move' of files can then be handled at the point when it occurs, so that the data is moved to the new location on OneDrive without the need to be deleted and re-uploaded.
This behavior is expected from the client under these specific conditions. Renaming or moving files is interpreted as deleting them from their original location and creating them in a new location. In standalone sync mode, the client lacks the capability to track file system changes (including renames and moves) that occur when it is not running. This limitation is the root cause of the observed 'deletion and re-upload' cycle.
**Recommended Workaround:**
For effective tracking of file and folder renames or moves to new local directories, it is recommended to run the client in service mode (`--monitor`) rather than in standalone mode. This approach allows the client to immediately process these changes, enabling the data to be updated (renamed or moved) in the new location on OneDrive without undergoing deletion and re-upload.
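For example, the client can be run in service mode either directly or via the bundled systemd units (a sketch; the service name follows the contrib/systemd files in this repository):
```
# Run directly in monitor mode
onedrive --monitor
# Or enable and start the systemd user service
systemctl --user enable --now onedrive
```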
## Application 'stops' running without any visible reason
**Issue Tracker:** [#494](https://github.com/abraunegg/onedrive/issues/494), [#753](https://github.com/abraunegg/onedrive/issues/753), [#792](https://github.com/abraunegg/onedrive/issues/792), [#884](https://github.com/abraunegg/onedrive/issues/884), [#1162](https://github.com/abraunegg/onedrive/issues/1162), [#1408](https://github.com/abraunegg/onedrive/issues/1408), [#1520](https://github.com/abraunegg/onedrive/issues/1520), [#1526](https://github.com/abraunegg/onedrive/issues/1526)
**Description:**
**Summary:**
When running the client and performing an upload or download operation, the application just stops working without any reason or explanation. If `echo $?` is used after the application has exited without visible reason, an error level of 141 may be provided.
Users experience sudden shutdowns of the client application during file transfers with Microsoft's Europe Data Centers, likely due to unstable internet or HTTPS inspection issues. This problem, often signalled by an error code of 141, is related to the application's reliance on Curl and OpenSSL. Resolution steps include system updates, seeking support from OS vendors, ISPs, and the OpenSSL/Curl teams, and providing detailed debug logs to Microsoft for analysis.
Additionally, this issue has mainly been seen when the client is operating against Microsoft's Europe Data Centre's.
**Detailed Description:**
**Explanation:**
The application unexpectedly stops functioning during upload or download operations when using the client. This issue occurs without any apparent reason. Running `echo $?` after the unexpected exit may return an error code of 141.
The client is heavily dependent on Curl and OpenSSL to perform the activities with the Microsoft OneDrive service. Generally, when this issue occurs, the following is found in the HTTPS Debug Log:
This problem predominantly arises when the client interacts with Microsoft's Europe Data Centers.
**Technical Explanation:**
The client heavily relies on Curl and OpenSSL for operations with the Microsoft OneDrive service. A common observation during this error is an entry in the HTTPS Debug Log stating:
```
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
```
The only way to determine that this is the cause of the application ceasing to work is to generate a HTTPS debug log using the following additional flags:
To confirm this as the root cause, a detailed HTTPS debug log can be generated by adding these flags:
```
--verbose --verbose --debug-https
```
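For example, a complete invocation that captures the debug output to a file might look as follows (a sketch, assuming standalone mode and the v2.5.0 option names):
```
onedrive --sync --verbose --verbose --debug-https > onedrive-debug.log 2>&1
```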
This is indicative of the following:
* Some sort of flaky Internet connection somewhere between you and the OneDrive service
* Some sort of 'broken' HTTPS transparent inspection service inspecting your traffic somewhere between you and the OneDrive service
This error typically suggests one of the following issues:
* An unstable internet connection between the user and the OneDrive service.
* An issue with HTTPS transparent inspection services that monitor the traffic en route to the OneDrive service.
**How to resolve:**
**Recommended Resolution:**
The best avenues of action here are:
* Ensure your OS is as up-to-date as possible
* Get support from your OS vendor
* Speak to your ISP or Help Desk for assistance
* Open a ticket with OpenSSL and/or Curl teams to better handle this sort of connection failure
* Generate a HTTPS Debug Log for this application and open a new support request with Microsoft and provide the debug log file for their analysis.
Recommended steps to address this issue include:
* Updating your operating system to the latest version.
* Seeking assistance from your OS vendor.
* Contacting your Internet Service Provider (ISP) or your IT Help Desk.
* Reporting the issue to the OpenSSL and/or Curl teams for improved handling of such connection failures.
* Creating a HTTPS Debug Log during the issue and submitting a support request to Microsoft with the log for their analysis.
If you wish to diagnose this issue further, refer to the following:
https://maulwuff.de/research/ssl-debugging.html
For more in-depth SSL troubleshooting, please read: https://maulwuff.de/research/ssl-debugging.html


@@ -141,6 +141,7 @@ If required, review the table below based on your 'lsb_release' information to p
| Debian 10 | You must build from source or upgrade your Operating System to Debian 12 |
| Debian 11 | Use [Debian 11](#distribution-debian-11) instructions below |
| Debian 12 | Use [Debian 12](#distribution-debian-12) instructions below |
| Debian Sid | Refer to https://packages.debian.org/sid/onedrive for assistance |
| Raspbian GNU/Linux 10 | You must build from source or upgrade your Operating System to Raspbian GNU/Linux 12 |
| Raspbian GNU/Linux 11 | Use [Debian 11](#distribution-debian-11) instructions below |
| Raspbian GNU/Linux 12 | Use [Debian 12](#distribution-debian-12) instructions below |
@@ -153,6 +154,11 @@ If required, review the table below based on your 'lsb_release' information to p
| Ubuntu 23.04 / Lunar | Use [Ubuntu 23.04](#distribution-ubuntu-2304) instructions below |
| Ubuntu 23.10 / Mantic | Use [Ubuntu 23.10](#distribution-ubuntu-2310) instructions below |
**Note:** If your Linux distribution and release are not in the table above, you have two options:
1. Compile the application from source. Refer to install.md (Compilation & Installation) for assistance.
2. Raise a support case with your Linux Distribution to provide you with an applicable package you can use.
## Distribution Package Install Instructions
### Distribution: Debian 11


@@ -170,11 +170,6 @@ Do not delete local file 'deletes' from OneDrive when using \fB\-\-upload\-only\
.br
Configuration file key: \fBno_remote_delete\fP (default: \fBfalse\fP)
.TP
\fB\-\-operation\-timeout\fP ARG
Set the maximum amount of time (seconds) a file operation is allowed to take. This includes DNS resolution, connecting, data transfer, etc.
.br
Configuration file key: \fBoperation_timeout\fP (default: \fB3600\fP)
.TP
\fB\-\-print\-token\fP
Print the access token, useful for debugging
.TP

src/clientSideFiltering.d

@@ -0,0 +1,400 @@
// What is this module called?
module clientSideFiltering;
// What does this module require to function?
import std.algorithm;
import std.array;
import std.file;
import std.path;
import std.regex;
import std.stdio;
import std.string;
import std.conv;
// What other modules that we have created do we need to import?
import config;
import util;
import log;
class ClientSideFiltering {
// Class variables
ApplicationConfig appConfig;
string[] paths;
string[] businessSharedItemsList;
Regex!char fileMask;
Regex!char directoryMask;
bool skipDirStrictMatch = false;
bool skipDotfiles = false;
this(ApplicationConfig appConfig) {
// Configure the class variable to consume the application configuration
this.appConfig = appConfig;
}
// Initialise the required items
bool initialise() {
// Log what is being done
addLogEntry("Configuring Client Side Filtering (Selective Sync)", ["debug"]);
// Load the sync_list file if it exists
if (exists(appConfig.syncListFilePath)){
loadSyncList(appConfig.syncListFilePath);
}
// Load the Business Shared Items file if it exists
if (exists(appConfig.businessSharedItemsFilePath)){
loadBusinessSharedItems(appConfig.businessSharedItemsFilePath);
}
// Configure skip_dir, skip_file, skip-dir-strict-match & skip_dotfiles from config entries
// Handle skip_dir configuration in config file
addLogEntry("Configuring skip_dir ...", ["debug"]);
addLogEntry("skip_dir: " ~ to!string(appConfig.getValueString("skip_dir")), ["debug"]);
setDirMask(appConfig.getValueString("skip_dir"));
// Was --skip-dir-strict-match configured?
addLogEntry("Configuring skip_dir_strict_match ...", ["debug"]);
addLogEntry("skip_dir_strict_match: " ~ to!string(appConfig.getValueBool("skip_dir_strict_match")), ["debug"]);
if (appConfig.getValueBool("skip_dir_strict_match")) {
setSkipDirStrictMatch();
}
// Was --skip-dot-files configured?
addLogEntry("Configuring skip_dotfiles ...", ["debug"]);
addLogEntry("skip_dotfiles: " ~ to!string(appConfig.getValueBool("skip_dotfiles")), ["debug"]);
if (appConfig.getValueBool("skip_dotfiles")) {
setSkipDotfiles();
}
// Handle skip_file configuration in config file
addLogEntry("Configuring skip_file ...", ["debug"]);
// Validate skip_file to ensure that this does not contain an invalid configuration
// Do not use a skip_file entry of .* as this will prevent correct searching of local changes to process.
foreach(entry; appConfig.getValueString("skip_file").split("|")){
if (entry == ".*") {
// invalid entry element detected
addLogEntry("ERROR: Invalid skip_file entry '.*' detected");
return false;
}
}
// All skip_file entries are valid
addLogEntry("skip_file: " ~ appConfig.getValueString("skip_file"), ["debug"]);
setFileMask(appConfig.getValueString("skip_file"));
// All configured OK
return true;
}
// Shutdown components
void shutdown() {
object.destroy(appConfig);
object.destroy(paths);
object.destroy(businessSharedItemsList);
object.destroy(fileMask);
object.destroy(directoryMask);
}
// Load sync_list file if it exists
void loadSyncList(string filepath) {
// open file as read only
auto file = File(filepath, "r");
auto range = file.byLine();
foreach (line; range) {
// Skip comments in file
if (line.length == 0 || line[0] == ';' || line[0] == '#') continue;
paths ~= buildNormalizedPath(line);
}
file.close();
}
// load business_shared_folders file
void loadBusinessSharedItems(string filepath) {
// open file as read only
auto file = File(filepath, "r");
auto range = file.byLine();
foreach (line; range) {
// Skip comments in file
if (line.length == 0 || line[0] == ';' || line[0] == '#') continue;
businessSharedItemsList ~= buildNormalizedPath(line);
}
file.close();
}
// Configure the regex that will be used for 'skip_file'
void setFileMask(const(char)[] mask) {
fileMask = wild2regex(mask);
addLogEntry("Selective Sync File Mask: " ~ to!string(fileMask), ["debug"]);
}
// Configure the regex that will be used for 'skip_dir'
void setDirMask(const(char)[] dirmask) {
directoryMask = wild2regex(dirmask);
addLogEntry("Selective Sync Directory Mask: " ~ to!string(directoryMask), ["debug"]);
}
// Configure skipDirStrictMatch if function is called
// By default, skipDirStrictMatch = false;
void setSkipDirStrictMatch() {
skipDirStrictMatch = true;
}
// Configure skipDotfiles if function is called
// By default, skipDotfiles = false;
void setSkipDotfiles() {
skipDotfiles = true;
}
// return value of skipDotfiles
bool getSkipDotfiles() {
return skipDotfiles;
}
// Match against sync_list only
bool isPathExcludedViaSyncList(string path) {
// Perform a 'sync_list' inclusion / exclusion test for the given path
return isPathExcluded(path, paths);
}
// config file skip_dir parameter
bool isDirNameExcluded(string name) {
// Does the directory name match skip_dir config entry?
// Returns true if the name matches a skip_dir config entry
// Returns false if no match
addLogEntry("skip_dir evaluation for: " ~ name, ["debug"]);
// Try full path match first
if (!name.matchFirst(directoryMask).empty) {
addLogEntry("'!name.matchFirst(directoryMask).empty' returned true = matched", ["debug"]);
return true;
} else {
// Do we check the base name as well?
if (!skipDirStrictMatch) {
addLogEntry("No Strict Matching Enforced", ["debug"]);
// Test the entire path working backwards from child
string path = buildNormalizedPath(name);
string checkPath;
auto paths = pathSplitter(path);
foreach_reverse(directory; paths) {
if (directory != "/") {
// This will add a leading '/' but that needs to be stripped to check
checkPath = "/" ~ directory ~ checkPath;
if(!checkPath.strip('/').matchFirst(directoryMask).empty) {
addLogEntry("'!checkPath.matchFirst(directoryMask).empty' returned true = matched", ["debug"]);
return true;
}
}
}
} else {
// No match
addLogEntry("Strict Matching Enforced - No Match", ["debug"]);
}
}
// no match
return false;
}
// config file skip_file parameter
bool isFileNameExcluded(string name) {
// Does the file name match skip_file config entry?
// Returns true if the name matches a skip_file config entry
// Returns false if no match
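// For example (illustrative), with the default skip_file of "~*|.~*|*.tmp" shown in 'config',
// 'document.tmp' and '~lockfile' are excluded, while 'notes.txt' is not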
addLogEntry("skip_file evaluation for: " ~ name, ["debug"]);
// Try full path match first
if (!name.matchFirst(fileMask).empty) {
return true;
} else {
// check just the file name
string filename = baseName(name);
if(!filename.matchFirst(fileMask).empty) {
return true;
}
}
// no match
return false;
}
// test if the given path is not included in the allowed paths
// if there are no allowed paths always return false
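// Illustrative example: given a 'sync_list' containing only the entry '/Documents',
// 'Documents/Report.docx' is included via a parental path match, while 'Music/song.mp3'
// matches no entry and is therefore excluded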
private bool isPathExcluded(string path, string[] allowedPaths) {
// function variables
bool exclude = false;
bool exludeDirectMatch = false; // set to true if there is a direct match to a sync_list exclusion entry
bool excludeMatched = false; // set to true if a sync_list exclusion pattern matches this path
bool finalResult = true; // updated to false if the path matches a sync_list inclusion entry
int offset;
string wildcard = "*";
// always allow the root
if (path == ".") return false;
// if there are no allowed paths always return false
if (allowedPaths.empty) return false;
path = buildNormalizedPath(path);
addLogEntry("Evaluation against 'sync_list' for this path: " ~ path, ["debug"]);
addLogEntry("[S]exclude = " ~ to!string(exclude), ["debug"]);
addLogEntry("[S]exludeDirectMatch = " ~ to!string(exludeDirectMatch), ["debug"]);
addLogEntry("[S]excludeMatched = " ~ to!string(excludeMatched), ["debug"]);
// unless the path is an exact match, all sync_list entries need to be processed to ensure
// negative matches are also correctly detected
foreach (allowedPath; allowedPaths) {
// is this an inclusion path or finer grained exclusion?
switch (allowedPath[0]) {
case '-':
// sync_list path starts with '-', this user wants to exclude this path
exclude = true;
// If the sync_list entry starts with '-/' offset needs to be 2, else 1
if (startsWith(allowedPath, "-/")){
// Offset needs to be 2
offset = 2;
} else {
// Offset needs to be 1
offset = 1;
}
break;
case '!':
// sync_list path starts with '!', this user wants to exclude this path
exclude = true;
// If the sync_list entry starts with '!/' offset needs to be 2, else 1
if (startsWith(allowedPath, "!/")){
// Offset needs to be 2
offset = 2;
} else {
// Offset needs to be 1
offset = 1;
}
break;
case '/':
// sync_list path starts with '/', this user wants to include this path
// but a '/' at the start causes matching issues, so use the offset for comparison
exclude = false;
offset = 1;
break;
default:
// no negative pattern, default is to not exclude
exclude = false;
offset = 0;
}
// What are we comparing against?
addLogEntry("Evaluation against 'sync_list' entry: " ~ allowedPath, ["debug"]);
// Generate the common prefix from the path vs the allowed path
auto comm = commonPrefix(path, allowedPath[offset..$]);
// Is the path an exact match of the allowed path?
if (comm.length == path.length) {
// we have a potential exact match
// strip any potential '/*' from the allowed path, to avoid a potential lesser common match
string strippedAllowedPath = strip(allowedPath[offset..$], "/*");
if (path == strippedAllowedPath) {
// we have an exact path match
addLogEntry("Exact path match with 'sync_list' entry", ["debug"]);
if (!exclude) {
addLogEntry("Evaluation against 'sync_list' result: direct match", ["debug"]);
finalResult = false;
// direct match, break and go sync
break;
} else {
addLogEntry("Evaluation against 'sync_list' result: direct match - path to be excluded", ["debug"]);
// do not set excludeMatched = true here, otherwise parental path also gets excluded
// flag exludeDirectMatch so that a 'wildcard match' will not override this exclude
exludeDirectMatch = true;
// final result
finalResult = true;
}
} else {
// no exact path match, but something common does match
addLogEntry("Something 'common' matches the 'sync_list' input path", ["debug"]);
auto splitAllowedPaths = pathSplitter(strippedAllowedPath);
string pathToEvaluate = "";
foreach(base; splitAllowedPaths) {
pathToEvaluate ~= base;
if (path == pathToEvaluate) {
// The input path matches what we want to evaluate against as a direct match
if (!exclude) {
addLogEntry("Evaluation against 'sync_list' result: direct match for parental path item", ["debug"]);
finalResult = false;
// direct match, break and go sync
break;
} else {
addLogEntry("Evaluation against 'sync_list' result: direct match for parental path item but to be excluded", ["debug"]);
finalResult = true;
// do not set excludeMatched = true here, otherwise parental path also gets excluded
}
}
pathToEvaluate ~= dirSeparator;
}
}
}
// Is the path a sub-item / sub-folder of the allowed path?
if (comm.length == allowedPath[offset..$].length) {
// The given path is potentially a subitem of an allowed path
// We want to capture sub-folders / files of allowed paths here, but not explicitly match other items when there is no wildcard
auto subItemPathCheck = allowedPath[offset..$] ~ "/";
if (canFind(path, subItemPathCheck)) {
// The 'path' includes the allowed path, and is 'most likely' a sub-path item
if (!exclude) {
addLogEntry("Evaluation against 'sync_list' result: parental path match", ["debug"]);
finalResult = false;
// parental path matches, break and go sync
break;
} else {
addLogEntry("Evaluation against 'sync_list' result: parental path match but must be excluded", ["debug"]);
finalResult = true;
excludeMatched = true;
}
}
}
// Does the allowed path contain a wildcard? (*)
if (canFind(allowedPath[offset..$], wildcard)) {
// allowed path contains a wildcard
// manually replace '*' with '.*' so the entry is regex compatible
string regexCompatiblePath = replace(allowedPath[offset..$], "*", ".*");
auto allowedMask = regex(regexCompatiblePath);
if (matchAll(path, allowedMask)) {
// regex wildcard evaluation matches
// if we have a prior pattern match for an exclude, excludeMatched = true
if (!exclude && !excludeMatched && !excludeDirectMatch) {
// nothing triggered an exclusion before evaluation against wildcard match attempt
addLogEntry("Evaluation against 'sync_list' result: wildcard pattern match", ["debug"]);
finalResult = false;
} else {
addLogEntry("Evaluation against 'sync_list' result: wildcard pattern matched but must be excluded", ["debug"]);
finalResult = true;
excludeMatched = true;
}
}
}
}
// Interim results
addLogEntry("[F]exclude = " ~ to!string(exclude), ["debug"]);
addLogEntry("[F]exludeDirectMatch = " ~ to!string(exludeDirectMatch), ["debug"]);
addLogEntry("[F]excludeMatched = " ~ to!string(excludeMatched), ["debug"]);
// If exclude or excludeMatched is true, then finalResult has to be true
if ((exclude) || (excludeMatched) || (excludeDirectMatch)) {
finalResult = true;
}
// results
if (finalResult) {
addLogEntry("Evaluation against 'sync_list' final result: EXCLUDED", ["debug"]);
} else {
addLogEntry("Evaluation against 'sync_list' final result: included for sync", ["debug"]);
}
return finalResult;
}
}
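To make the wildcard handling above concrete, the following standalone sketch mirrors the '*' to '.*' conversion and unanchored regex evaluation used in the wildcard branch of isPathExcluded(). The function name and sample entries are illustrative assumptions, not part of the client:

// Standalone illustration (not client code) of sync_list wildcard evaluation
import std.array : replace;
import std.regex : matchAll, regex;
import std.stdio : writeln;

bool wildcardMatches(string entry, string path) {
    // mirror the client's conversion of a sync_list wildcard to a regex
    string regexCompatiblePath = replace(entry, "*", ".*");
    // unanchored match, as with matchAll() in the client
    return !matchAll(path, regex(regexCompatiblePath)).empty;
}

void main() {
    writeln(wildcardMatches("Music/*.mp3", "Music/track.mp3"));  // true
    writeln(wildcardMatches("Music/*.mp3", "Music/track.flac")); // false
}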

File diff suppressed because it is too large

110
src/curlEngine.d Normal file
View file

@ -0,0 +1,110 @@
// What is this module called?
module curlEngine;
// What does this module require to function?
import std.net.curl;
import etc.c.curl: CurlOption;
import std.datetime;
import std.conv;
import std.stdio;
// What other modules that we have created do we need to import?
import log;
class CurlEngine {
HTTP http;
bool keepAlive;
ulong dnsTimeout;
this() {
http = HTTP();
}
void initialise(ulong dnsTimeout, ulong connectTimeout, ulong dataTimeout, ulong operationTimeout, int maxRedirects, bool httpsDebug, string userAgent, bool httpProtocol, ulong userRateLimit, ulong protocolVersion, bool keepAlive=false) {
// Setting keepAlive to false ensures that when we close the curl instance, any open sockets are closed - which we need to do when running
// multiple threads and API instances at the same time, otherwise we run out of local file handles | sockets pretty quickly
this.keepAlive = keepAlive;
this.dnsTimeout = dnsTimeout;
// Curl Timeout Handling
// libcurl dns_cache_timeout timeout
// https://curl.se/libcurl/c/CURLOPT_DNS_CACHE_TIMEOUT.html
// https://dlang.org/library/std/net/curl/http.dns_timeout.html
http.dnsTimeout = (dur!"seconds"(dnsTimeout));
// Timeout for HTTPS connections
// https://curl.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html
// https://dlang.org/library/std/net/curl/http.connect_timeout.html
http.connectTimeout = (dur!"seconds"(connectTimeout));
// Timeout for activity on connection
// This is a DMD | DLANG specific item, not a libcurl item
// https://dlang.org/library/std/net/curl/http.data_timeout.html
// https://raw.githubusercontent.com/dlang/phobos/master/std/net/curl.d - private enum _defaultDataTimeout = dur!"minutes"(2);
http.dataTimeout = (dur!"seconds"(dataTimeout));
// Maximum time any operation is allowed to take
// This includes dns resolution, connecting, data transfer, etc.
// https://curl.se/libcurl/c/CURLOPT_TIMEOUT_MS.html
// https://dlang.org/library/std/net/curl/http.operation_timeout.html
http.operationTimeout = (dur!"seconds"(operationTimeout));
// Specify how many redirects should be allowed
http.maxRedirects(maxRedirects);
// Debug HTTPS
http.verbose = httpsDebug;
// Use the configured 'user_agent' value
http.setUserAgent = userAgent;
// What IP protocol version should be used when using Curl - IPv4 & IPv6, IPv4 or IPv6
http.handle.set(CurlOption.ipresolve,protocolVersion); // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
// What version of HTTP protocol do we use?
// Curl >= 7.62.0 defaults to http2 for a significant number of operations
if (httpProtocol) {
// Downgrade to HTTP/1.1 - in the libcurl CURL_HTTP_VERSION enumeration the value 2 corresponds to HTTP/1.1
http.handle.set(CurlOption.http_version,2);
}
// Configure upload / download rate limits if configured
// 131072 = 128 KB/s - minimum for basic application operations to prevent timeouts
// A 0 value means rate is unlimited, and is the curl default
if (userRateLimit > 0) {
// set rate limit
http.handle.set(CurlOption.max_send_speed_large,userRateLimit);
http.handle.set(CurlOption.max_recv_speed_large,userRateLimit);
}
// Explicitly set these libcurl options
// https://curl.se/libcurl/c/CURLOPT_NOSIGNAL.html
// Ensure that nosignal is set to 0 - Setting CURLOPT_NOSIGNAL to 0 makes libcurl ask the system to ignore SIGPIPE signals
http.handle.set(CurlOption.nosignal,0);
// https://curl.se/libcurl/c/CURLOPT_TCP_NODELAY.html
// Ensure that TCP_NODELAY is set to 0 so that Nagle's algorithm remains enabled
http.handle.set(CurlOption.tcp_nodelay,0);
if (httpsDebug) {
// Output what options we are using so that in the debug log this can be tracked
addLogEntry("http.dnsTimeout = " ~ to!string(dnsTimeout), ["debug"]);
addLogEntry("http.connectTimeout = " ~ to!string(connectTimeout), ["debug"]);
addLogEntry("http.dataTimeout = " ~ to!string(dataTimeout), ["debug"]);
addLogEntry("http.operationTimeout = " ~ to!string(operationTimeout), ["debug"]);
addLogEntry("http.maxRedirects = " ~ to!string(maxRedirects), ["debug"]);
addLogEntry("http.CurlOption.ipresolve = " ~ to!string(protocolVersion), ["debug"]);
addLogEntry("http.header.Connection.keepAlive = " ~ to!string(keepAlive), ["debug"]);
}
}
void connect(HTTP.Method method, const(char)[] url) {
if (!keepAlive)
http.addRequestHeader("Connection", "close");
http.method = method;
http.url = url;
}
void setDisableSSLVerifyPeer() {
addLogEntry("CAUTION: Switching off CurlOption.ssl_verifypeer ... this makes the application insecure.", ["debug"]);
http.handle.set(CurlOption.ssl_verifypeer, 0);
}
}
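As a usage illustration, a caller might construct and configure the engine along the following lines. The literal timeout, user agent, and URL values are assumptions chosen for the example, not the client's actual configured defaults:

// Hypothetical usage sketch of the CurlEngine class above
import std.net.curl : HTTP; // for HTTP.Method

auto engine = new CurlEngine();
// dnsTimeout 60s, connectTimeout 10s, dataTimeout 240s, operationTimeout 3600s,
// 5 redirects, HTTPS debug off, example user agent, keep curl's default HTTP
// version (httpProtocol = false), no rate limit, IPv4 + IPv6 (0), no keep-alive
engine.initialise(60, 10, 240, 3600, 5, false, "ExampleAgent/1.0", false, 0, 0);
engine.connect(HTTP.Method.get, "https://graph.microsoft.com/v1.0/me/drive");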

View file

@ -1,3 +1,7 @@
// What is this module called?
module itemdb;
// What does this module require to function?
import std.datetime;
import std.exception;
import std.path;
@ -5,19 +9,26 @@ import std.string;
import std.stdio;
import std.algorithm.searching;
import core.stdc.stdlib;
import std.json;
import std.conv;
// What other modules that we have created do we need to import?
import sqlite;
static import log;
import util;
import log;
enum ItemType {
file,
dir,
remote
remote,
unknown
}
struct Item {
string driveId;
string id;
string name;
string remoteName;
ItemType type;
string eTag;
string cTag;
@ -28,23 +39,144 @@ struct Item {
string remoteDriveId;
string remoteId;
string syncStatus;
string size;
}
final class ItemDatabase
{
// Construct an Item struct from a JSON driveItem
Item makeDatabaseItem(JSONValue driveItem) {
Item item = {
id: driveItem["id"].str,
name: "name" in driveItem ? driveItem["name"].str : null, // name may be missing for deleted files in OneDrive Business
eTag: "eTag" in driveItem ? driveItem["eTag"].str : null, // eTag is not returned for the root in OneDrive Business
cTag: "cTag" in driveItem ? driveItem["cTag"].str : null, // cTag is missing in old files (and all folders in OneDrive Business)
remoteName: "actualOnlineName" in driveItem ? driveItem["actualOnlineName"].str : null, // actualOnlineName is only used with OneDrive Business Shared Folders
};
// OneDrive API Change: https://github.com/OneDrive/onedrive-api-docs/issues/834
// OneDrive no longer returns lastModifiedDateTime if the item is deleted by OneDrive
if(isItemDeleted(driveItem)) {
// Set mtime to SysTime(0)
item.mtime = SysTime(0);
} else {
// Item is not in a deleted state
// Resolve 'Key not found: fileSystemInfo' when the item is a remote item
// https://github.com/abraunegg/onedrive/issues/11
if (isItemRemote(driveItem)) {
// remoteItem is a OneDrive object that exists on a 'different' OneDrive drive id, when compared to account default
// Normally the 'remoteItem' field will contain 'fileSystemInfo'; however, if the user uses the 'Add Shortcut ..' option in the OneDrive WebUI
// to create a 'link', this object, whilst remote, does not have 'fileSystemInfo' in the expected place, thus leading to an application crash
// See: https://github.com/abraunegg/onedrive/issues/1533
if ("fileSystemInfo" in driveItem["remoteItem"]) {
// 'fileSystemInfo' is in 'remoteItem' which will be the majority of cases
item.mtime = SysTime.fromISOExtString(driveItem["remoteItem"]["fileSystemInfo"]["lastModifiedDateTime"].str);
} else {
// is a remote item, but 'fileSystemInfo' is missing from 'remoteItem'
if ("fileSystemInfo" in driveItem) {
item.mtime = SysTime.fromISOExtString(driveItem["fileSystemInfo"]["lastModifiedDateTime"].str);
}
}
} else {
// Does fileSystemInfo exist at all ?
if ("fileSystemInfo" in driveItem) {
item.mtime = SysTime.fromISOExtString(driveItem["fileSystemInfo"]["lastModifiedDateTime"].str);
}
}
}
// Set this item object type
bool typeSet = false;
if (isItemFile(driveItem)) {
// 'file' object exists in the JSON
addLogEntry("Flagging object as a file", ["debug"]);
typeSet = true;
item.type = ItemType.file;
}
if (isItemFolder(driveItem)) {
// 'folder' object exists in the JSON
addLogEntry("Flagging object as a directory", ["debug"]);
typeSet = true;
item.type = ItemType.dir;
}
if (isItemRemote(driveItem)) {
// 'remote' object exists in the JSON
addLogEntry("Flagging object as a remote", ["debug"]);
typeSet = true;
item.type = ItemType.remote;
}
// root and remote items do not have parentReference
if (!isItemRoot(driveItem) && ("parentReference" in driveItem) != null) {
item.driveId = driveItem["parentReference"]["driveId"].str;
if (hasParentReferenceId(driveItem)) {
item.parentId = driveItem["parentReference"]["id"].str;
}
}
// extract the file hash and file size
if (isItemFile(driveItem) && ("hashes" in driveItem["file"])) {
// Get file size
if (hasFileSize(driveItem)) {
item.size = to!string(driveItem["size"].integer);
// Get quickXorHash as default
if ("quickXorHash" in driveItem["file"]["hashes"]) {
item.quickXorHash = driveItem["file"]["hashes"]["quickXorHash"].str;
} else {
addLogEntry("quickXorHash is missing from " ~ driveItem["id"].str, ["debug"]);
}
// If quickXorHash is empty ..
if (item.quickXorHash.empty) {
// Is there a sha256Hash?
if ("sha256Hash" in driveItem["file"]["hashes"]) {
item.sha256Hash = driveItem["file"]["hashes"]["sha256Hash"].str;
} else {
addLogEntry("sha256Hash is missing from " ~ driveItem["id"].str, ["debug"]);
}
}
} else {
// So that we have at least a zero value here as the API provided no 'size' data for this file item
item.size = "0";
}
}
// Is the object a remote drive item - living on another driveId ?
if (isItemRemote(driveItem)) {
item.remoteDriveId = driveItem["remoteItem"]["parentReference"]["driveId"].str;
item.remoteId = driveItem["remoteItem"]["id"].str;
}
// We have 3 different operational modes where 'item.syncStatus' is used to flag if an item is synced or not:
// - National Cloud Deployments do not support /delta as a query
// - When using --single-directory
// - When using --download-only --cleanup-local-files
//
// Thus we need to track in the database that this item is in sync
// As we are making an item, set the syncStatus to Y
// ONLY when one of the three modes above is being used, all the existing DB entries will get set to N
// so when processing /children, it can be identified what the 'deleted' difference is
item.syncStatus = "Y";
// Return the created item
return item;
}
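For orientation, a minimal driveItem JSON that exercises the main branches of makeDatabaseItem() might look like the following; all field values are illustrative assumptions, not captured API output:

// Hypothetical minimal driveItem JSON for makeDatabaseItem() above
import std.json : parseJSON;

auto driveItem = parseJSON(`{
    "id": "01EXAMPLEID",
    "name": "report.docx",
    "eTag": "\"{AAAA},1\"",
    "cTag": "\"c:{AAAA},1\"",
    "size": 12345,
    "file": { "hashes": { "quickXorHash": "ZXhhbXBsZQ==" } },
    "parentReference": { "driveId": "b!exampledrive", "id": "01PARENTID" },
    "fileSystemInfo": { "lastModifiedDateTime": "2024-01-09T09:13:17Z" }
}`);
// This would be flagged as ItemType.file, with size stored as the string
// "12345", the quickXorHash recorded, and parentReference mapped to
// item.driveId / item.parentId.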
final class ItemDatabase {
// increment this for every change in the db schema
immutable int itemDatabaseVersion = 11;
immutable int itemDatabaseVersion = 12;
Database db;
string insertItemStmt;
string updateItemStmt;
string selectItemByIdStmt;
string selectItemByRemoteIdStmt;
string selectItemByParentIdStmt;
string deleteItemByIdStmt;
bool databaseInitialised = false;
this(const(char)[] filename)
{
this(const(char)[] filename) {
db = Database(filename);
int dbVersion;
try {
@ -52,14 +184,14 @@ final class ItemDatabase
} catch (SqliteException e) {
// An error was generated - what was the error?
if (e.msg == "database is locked") {
writeln();
log.error("ERROR: onedrive application is already running - check system process list for active application instances");
log.vlog(" - Use 'sudo ps aufxw | grep onedrive' to potentially determine acive running process");
writeln();
addLogEntry();
addLogEntry("ERROR: onedrive application is already running - check system process list for active application instances");
addLogEntry(" - Use 'sudo ps aufxw | grep onedrive' to potentially determine acive running process", ["verbose"]);
addLogEntry();
} else {
writeln();
log.error("ERROR: An internal database error occurred: " ~ e.msg);
writeln();
addLogEntry();
addLogEntry("ERROR: An internal database error occurred: " ~ e.msg);
addLogEntry();
}
return;
}
@ -67,10 +199,15 @@ final class ItemDatabase
if (dbVersion == 0) {
createTable();
} else if (db.getVersion() != itemDatabaseVersion) {
log.log("The item database is incompatible, re-creating database table structures");
addLogEntry("The item database is incompatible, re-creating database table structures");
db.exec("DROP TABLE item");
createTable();
}
// What is the threadsafe value
auto threadsafeValue = db.getThreadsafeValue();
addLogEntry("Threadsafe database value: " ~ to!string(threadsafeValue), ["debug"]);
// Set the enforcement of foreign key constraints.
// https://www.sqlite.org/pragma.html#pragma_foreign_keys
// PRAGMA foreign_keys = boolean;
@ -99,12 +236,12 @@ final class ItemDatabase
db.exec("PRAGMA locking_mode = EXCLUSIVE");
insertItemStmt = "
INSERT OR REPLACE INTO item (driveId, id, name, type, eTag, cTag, mtime, parentId, quickXorHash, sha256Hash, remoteDriveId, remoteId, syncStatus)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13)
INSERT OR REPLACE INTO item (driveId, id, name, remoteName, type, eTag, cTag, mtime, parentId, quickXorHash, sha256Hash, remoteDriveId, remoteId, syncStatus, size)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15)
";
updateItemStmt = "
UPDATE item
SET name = ?3, type = ?4, eTag = ?5, cTag = ?6, mtime = ?7, parentId = ?8, quickXorHash = ?9, sha256Hash = ?10, remoteDriveId = ?11, remoteId = ?12, syncStatus = ?13
SET name = ?3, remoteName = ?4, type = ?5, eTag = ?6, cTag = ?7, mtime = ?8, parentId = ?9, quickXorHash = ?10, sha256Hash = ?11, remoteDriveId = ?12, remoteId = ?13, syncStatus = ?14, size = ?15
WHERE driveId = ?1 AND id = ?2
";
selectItemByIdStmt = "
@ -112,6 +249,11 @@ final class ItemDatabase
FROM item
WHERE driveId = ?1 AND id = ?2
";
selectItemByRemoteIdStmt = "
SELECT *
FROM item
WHERE remoteDriveId = ?1 AND remoteId = ?2
";
selectItemByParentIdStmt = "SELECT * FROM item WHERE driveId = ? AND parentId = ?";
deleteItemByIdStmt = "DELETE FROM item WHERE driveId = ? AND id = ?";
@ -119,17 +261,16 @@ final class ItemDatabase
databaseInitialised = true;
}
bool isDatabaseInitialised()
{
bool isDatabaseInitialised() {
return databaseInitialised;
}
void createTable()
{
void createTable() {
db.exec("CREATE TABLE item (
driveId TEXT NOT NULL,
id TEXT NOT NULL,
name TEXT NOT NULL,
remoteName TEXT,
type TEXT NOT NULL,
eTag TEXT,
cTag TEXT,
@ -141,6 +282,7 @@ final class ItemDatabase
remoteId TEXT,
deltaLink TEXT,
syncStatus TEXT,
size TEXT,
PRIMARY KEY (driveId, id),
FOREIGN KEY (driveId, parentId)
REFERENCES item (driveId, id)
@ -154,32 +296,27 @@ final class ItemDatabase
db.setVersion(itemDatabaseVersion);
}
void insert(const ref Item item)
{
void insert(const ref Item item) {
auto p = db.prepare(insertItemStmt);
bindItem(item, p);
p.exec();
}
void update(const ref Item item)
{
void update(const ref Item item) {
auto p = db.prepare(updateItemStmt);
bindItem(item, p);
p.exec();
}
void dump_open_statements()
{
void dump_open_statements() {
db.dump_open_statements();
}
int db_checkpoint()
{
int db_checkpoint() {
return db.db_checkpoint();
}
void upsert(const ref Item item)
{
void upsert(const ref Item item) {
auto s = db.prepare("SELECT COUNT(*) FROM item WHERE driveId = ? AND id = ?");
s.bind(1, item.driveId);
s.bind(2, item.id);
@ -191,8 +328,7 @@ final class ItemDatabase
stmt.exec();
}
Item[] selectChildren(const(char)[] driveId, const(char)[] id)
{
Item[] selectChildren(const(char)[] driveId, const(char)[] id) {
auto p = db.prepare(selectItemByParentIdStmt);
p.bind(1, driveId);
p.bind(2, id);
@ -205,8 +341,7 @@ final class ItemDatabase
return items;
}
bool selectById(const(char)[] driveId, const(char)[] id, out Item item)
{
bool selectById(const(char)[] driveId, const(char)[] id, out Item item) {
auto p = db.prepare(selectItemByIdStmt);
p.bind(1, driveId);
p.bind(2, id);
@ -218,9 +353,20 @@ final class ItemDatabase
return false;
}
bool selectByRemoteId(const(char)[] remoteDriveId, const(char)[] remoteId, out Item item) {
auto p = db.prepare(selectItemByRemoteIdStmt);
p.bind(1, remoteDriveId);
p.bind(2, remoteId);
auto r = p.exec();
if (!r.empty) {
item = buildItem(r);
return true;
}
return false;
}
// returns true if an item id is in the database
bool idInLocalDatabase(const(string) driveId, const(string)id)
{
bool idInLocalDatabase(const(string) driveId, const(string)id) {
auto p = db.prepare(selectItemByIdStmt);
p.bind(1, driveId);
p.bind(2, id);
@ -233,18 +379,11 @@ final class ItemDatabase
// returns the item with the given path
// the path is relative to the sync directory ex: "./Music/Turbo Killer.mp3"
bool selectByPath(const(char)[] path, string rootDriveId, out Item item)
{
bool selectByPath(const(char)[] path, string rootDriveId, out Item item) {
Item currItem = { driveId: rootDriveId };
// Issue https://github.com/abraunegg/onedrive/issues/578
if (startsWith(path, "./") || path == ".") {
// Need to remove the . from the path prefix
path = "root/" ~ path.chompPrefix(".");
} else {
// Leave path as it is
path = "root/" ~ path;
}
path = "root/" ~ (startsWith(path, "./") || path == "." ? path.chompPrefix(".") : path);
auto s = db.prepare("SELECT * FROM item WHERE name = ?1 AND driveId IS ?2 AND parentId IS ?3");
foreach (name; pathSplitter(path)) {
@ -254,12 +393,15 @@ final class ItemDatabase
auto r = s.exec();
if (r.empty) return false;
currItem = buildItem(r);
// if the item is of type remote substitute it with the child
// If the item is of type remote substitute it with the child
if (currItem.type == ItemType.remote) {
addLogEntry("Record is a Remote Object: " ~ to!string(currItem), ["debug"]);
Item child;
if (selectById(currItem.remoteDriveId, currItem.remoteId, child)) {
assert(child.type != ItemType.remote, "The type of the child cannot be remote");
currItem = child;
addLogEntry("Selecting Record that is NOT Remote Object: " ~ to!string(currItem), ["debug"]);
}
}
}
@ -267,19 +409,12 @@ final class ItemDatabase
return true;
}
// same as selectByPath() but it does not traverse remote folders
bool selectByPathWithoutRemote(const(char)[] path, string rootDriveId, out Item item)
{
// same as selectByPath() but it does not traverse remote folders, returns the remote element if that is what is required
bool selectByPathIncludingRemoteItems(const(char)[] path, string rootDriveId, out Item item) {
Item currItem = { driveId: rootDriveId };
// Issue https://github.com/abraunegg/onedrive/issues/578
if (startsWith(path, "./") || path == ".") {
// Need to remove the . from the path prefix
path = "root/" ~ path.chompPrefix(".");
} else {
// Leave path as it is
path = "root/" ~ path;
}
path = "root/" ~ (startsWith(path, "./") || path == "." ? path.chompPrefix(".") : path);
auto s = db.prepare("SELECT * FROM item WHERE name IS ?1 AND driveId IS ?2 AND parentId IS ?3");
foreach (name; pathSplitter(path)) {
@ -290,62 +425,89 @@ final class ItemDatabase
if (r.empty) return false;
currItem = buildItem(r);
}
if (currItem.type == ItemType.remote) {
addLogEntry("Record selected is a Remote Object: " ~ to!string(currItem), ["debug"]);
}
item = currItem;
return true;
}
void deleteById(const(char)[] driveId, const(char)[] id)
{
void deleteById(const(char)[] driveId, const(char)[] id) {
auto p = db.prepare(deleteItemByIdStmt);
p.bind(1, driveId);
p.bind(2, id);
p.exec();
}
private void bindItem(const ref Item item, ref Statement stmt)
{
private void bindItem(const ref Item item, ref Statement stmt) {
with (stmt) with (item) {
bind(1, driveId);
bind(2, id);
bind(3, name);
bind(4, remoteName);
string typeStr = null;
final switch (type) with (ItemType) {
case file: typeStr = "file"; break;
case dir: typeStr = "dir"; break;
case remote: typeStr = "remote"; break;
case unknown: typeStr = "unknown"; break;
}
bind(4, typeStr);
bind(5, eTag);
bind(6, cTag);
bind(7, mtime.toISOExtString());
bind(8, parentId);
bind(9, quickXorHash);
bind(10, sha256Hash);
bind(11, remoteDriveId);
bind(12, remoteId);
bind(13, syncStatus);
bind(5, typeStr);
bind(6, eTag);
bind(7, cTag);
bind(8, mtime.toISOExtString());
bind(9, parentId);
bind(10, quickXorHash);
bind(11, sha256Hash);
bind(12, remoteDriveId);
bind(13, remoteId);
bind(14, syncStatus);
bind(15, size);
}
}
private Item buildItem(Statement.Result result)
{
private Item buildItem(Statement.Result result) {
assert(!result.empty, "The result must not be empty");
assert(result.front.length == 14, "The result must have 14 columns");
assert(result.front.length == 16, "The result must have 16 columns");
Item item = {
// column 0: driveId
// column 1: id
// column 2: name
// column 3: remoteName - only used when there is a difference in the local name & remote shared folder name
// column 4: type
// column 5: eTag
// column 6: cTag
// column 7: mtime
// column 8: parentId
// column 9: quickXorHash
// column 10: sha256Hash
// column 11: remoteDriveId
// column 12: remoteId
// column 13: deltaLink
// column 14: syncStatus
// column 15: size
driveId: result.front[0].dup,
id: result.front[1].dup,
name: result.front[2].dup,
eTag: result.front[4].dup,
cTag: result.front[5].dup,
mtime: SysTime.fromISOExtString(result.front[6]),
parentId: result.front[7].dup,
quickXorHash: result.front[8].dup,
sha256Hash: result.front[9].dup,
remoteDriveId: result.front[10].dup,
remoteId: result.front[11].dup,
syncStatus: result.front[12].dup
remoteName: result.front[3].dup,
// Column 4 is type - not set here
eTag: result.front[5].dup,
cTag: result.front[6].dup,
mtime: SysTime.fromISOExtString(result.front[7]),
parentId: result.front[8].dup,
quickXorHash: result.front[9].dup,
sha256Hash: result.front[10].dup,
remoteDriveId: result.front[11].dup,
remoteId: result.front[12].dup,
// Column 13 is deltaLink - not set here
syncStatus: result.front[14].dup,
size: result.front[15].dup
};
switch (result.front[3]) {
switch (result.front[4]) {
case "file": item.type = ItemType.file; break;
case "dir": item.type = ItemType.dir; break;
case "remote": item.type = ItemType.remote; break;
@ -357,8 +519,7 @@ final class ItemDatabase
// computes the path of the given item id
// the path is relative to the sync directory ex: "Music/Turbo Killer.mp3"
// the trailing slash is not added even if the item is a directory
string computePath(const(char)[] driveId, const(char)[] id)
{
string computePath(const(char)[] driveId, const(char)[] id) {
assert(driveId && id);
string path;
Item item;
@ -406,9 +567,9 @@ final class ItemDatabase
}
} else {
// broken tree
log.vdebug("The following generated a broken tree query:");
log.vdebug("Drive ID: ", driveId);
log.vdebug("Item ID: ", id);
addLogEntry("The following generated a broken tree query:", ["debug"]);
addLogEntry("Drive ID: " ~ to!string(driveId), ["debug"]);
addLogEntry("Item ID: " ~ to!string(id), ["debug"]);
assert(0);
}
}
@ -416,8 +577,7 @@ final class ItemDatabase
return path;
}
Item[] selectRemoteItems()
{
Item[] selectRemoteItems() {
Item[] items;
auto stmt = db.prepare("SELECT * FROM item WHERE remoteDriveId IS NOT NULL");
auto res = stmt.exec();
@ -428,8 +588,11 @@ final class ItemDatabase
return items;
}
string getDeltaLink(const(char)[] driveId, const(char)[] id)
{
string getDeltaLink(const(char)[] driveId, const(char)[] id) {
// Log what we received
addLogEntry("DeltaLink Query (driveId): " ~ to!string(driveId), ["debug"]);
addLogEntry("DeltaLink Query (id): " ~ to!string(id), ["debug"]);
assert(driveId && id);
auto stmt = db.prepare("SELECT deltaLink FROM item WHERE driveId = ?1 AND id = ?2");
stmt.bind(1, driveId);
@ -439,8 +602,7 @@ final class ItemDatabase
return res.front[0].dup;
}
void setDeltaLink(const(char)[] driveId, const(char)[] id, const(char)[] deltaLink)
{
void setDeltaLink(const(char)[] driveId, const(char)[] id, const(char)[] deltaLink) {
assert(driveId && id);
assert(deltaLink);
auto stmt = db.prepare("UPDATE item SET deltaLink = ?3 WHERE driveId = ?1 AND id = ?2");
@ -455,8 +617,7 @@ final class ItemDatabase
// As we query /children to get all children from OneDrive, update anything in the database
// to be flagged as not-in-sync, thus we can use that flag to determine what was previously
// in-sync, but now deleted on OneDrive
void downgradeSyncStatusFlag(const(char)[] driveId, const(char)[] id)
{
void downgradeSyncStatusFlag(const(char)[] driveId, const(char)[] id) {
assert(driveId);
auto stmt = db.prepare("UPDATE item SET syncStatus = 'N' WHERE driveId = ?1 AND id = ?2");
stmt.bind(1, driveId);
@ -466,8 +627,7 @@ final class ItemDatabase
// National Cloud Deployments (US and DE) do not support /delta as a query
// Select items that have a out-of-sync flag set
Item[] selectOutOfSyncItems(const(char)[] driveId)
{
Item[] selectOutOfSyncItems(const(char)[] driveId) {
assert(driveId);
Item[] items;
auto stmt = db.prepare("SELECT * FROM item WHERE syncStatus = 'N' AND driveId = ?1");
@ -482,8 +642,7 @@ final class ItemDatabase
// OneDrive Business Folders are stored in the database potentially without a root | parentRoot link
// Select items associated with the provided driveId
Item[] selectByDriveId(const(char)[] driveId)
{
Item[] selectByDriveId(const(char)[] driveId) {
assert(driveId);
Item[] items;
auto stmt = db.prepare("SELECT * FROM item WHERE driveId = ?1 AND parentId IS NULL");
@ -496,22 +655,37 @@ final class ItemDatabase
return items;
}
// Select all items associated with the provided driveId
Item[] selectAllItemsByDriveId(const(char)[] driveId) {
assert(driveId);
Item[] items;
auto stmt = db.prepare("SELECT * FROM item WHERE driveId = ?1");
stmt.bind(1, driveId);
auto res = stmt.exec();
while (!res.empty) {
items ~= buildItem(res);
res.step();
}
return items;
}
// Perform a vacuum on the database, commit WAL / SHM to file
void performVacuum()
{
void performVacuum() {
addLogEntry("Attempting to perform a database vacuum to merge any temporary data", ["debug"]);
try {
auto stmt = db.prepare("VACUUM;");
stmt.exec();
addLogEntry("Database vacuum is complete", ["debug"]);
} catch (SqliteException e) {
writeln();
log.error("ERROR: Unable to perform a database vacuum: " ~ e.msg);
writeln();
addLogEntry();
addLogEntry("ERROR: Unable to perform a database vacuum: " ~ e.msg);
addLogEntry();
}
}
// Select distinct driveId items from database
string[] selectDistinctDriveIds()
{
string[] selectDistinctDriveIds() {
string[] driveIdArray;
auto stmt = db.prepare("SELECT DISTINCT driveId FROM item;");
auto res = stmt.exec();
@ -522,4 +696,4 @@ final class ItemDatabase
}
return driveIdArray;
}
}
}
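As a brief usage sketch of this database layer; the database path, drive id, and file path below are assumptions for illustration:

// Hypothetical usage of the ItemDatabase class above
auto itemDB = new ItemDatabase("/home/user/.config/onedrive/items.sqlite3");
if (itemDB.isDatabaseInitialised()) {
    Item entry;
    // look up an item by its path relative to the sync directory
    if (itemDB.selectByPath("Documents/report.docx", "exampledriveid", entry)) {
        addLogEntry("Item found in database: " ~ entry.name);
    }
}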

351
src/log.d
View file

@ -1,239 +1,156 @@
// What is this module called?
module log;
// What does this module require to function?
import std.stdio;
import std.file;
import std.datetime;
import std.process;
import std.conv;
import core.memory;
import core.sys.posix.pwd, core.sys.posix.unistd, core.stdc.string : strlen;
import std.algorithm : splitter;
import std.concurrency;
import std.typecons;
import core.sync.mutex;
import core.thread;
import std.format;
import std.string;
version(Notifications) {
import dnotify;
}
// enable verbose logging
long verbose;
bool writeLogFile = false;
bool logFileWriteFailFlag = false;
// Shared module object
shared LogBuffer logBuffer;
private bool doNotifications;
class LogBuffer {
private:
string[3][] buffer;
Mutex bufferLock;
string logFilePath;
bool writeToFile;
bool verboseLogging;
bool debugLogging;
Thread flushThread;
bool isRunning;
bool sendGUINotification;
// shared string variable for username
string username;
string logFilePath;
public:
this(bool verboseLogging, bool debugLogging) {
// Initialise the mutex
bufferLock = new Mutex();
// Initialise other items
this.logFilePath = logFilePath;
this.writeToFile = writeToFile;
this.verboseLogging = verboseLogging;
this.debugLogging = debugLogging;
this.isRunning = true;
this.sendGUINotification = true;
this.flushThread = new Thread(&flushBuffer);
flushThread.isDaemon(true);
flushThread.start();
}
void init(string logDir)
{
writeLogFile = true;
username = getUserName();
logFilePath = logDir;
if (!exists(logFilePath)){
// logfile path does not exist
try {
mkdirRecurse(logFilePath);
}
catch (std.file.FileException e) {
// we got an error ..
writeln("\nUnable to access ", logFilePath);
writeln("Please manually create '",logFilePath, "' and set appropriate permissions to allow write access");
writeln("The requested client activity log will instead be located in your users home directory");
}
}
}
~this() {
isRunning = false;
flushThread.join();
flush();
}
void setNotifications(bool value)
{
version(Notifications) {
// if we try to enable notifications, check for server availability
// and disable them in case the dbus server is not reachable
if (value) {
auto serverAvailable = dnotify.check_availability();
if (!serverAvailable) {
log("Notification (dbus) server not available, disabling");
value = false;
}
}
}
doNotifications = value;
}
void log(T...)(T args)
{
writeln(args);
if(writeLogFile){
// Write to log file
logfileWriteLine(args);
}
}
void logAndNotify(T...)(T args)
{
notify(args);
log(args);
}
void fileOnly(T...)(T args)
{
if(writeLogFile){
// Write to log file
logfileWriteLine(args);
}
}
void vlog(T...)(T args)
{
if (verbose >= 1) {
writeln(args);
if(writeLogFile){
// Write to log file
logfileWriteLine(args);
}
}
}
void vdebug(T...)(T args)
{
if (verbose >= 2) {
writeln("[DEBUG] ", args);
if(writeLogFile){
// Write to log file
logfileWriteLine("[DEBUG] ", args);
}
}
}
void vdebugNewLine(T...)(T args)
{
if (verbose >= 2) {
writeln("\n[DEBUG] ", args);
if(writeLogFile){
// Write to log file
logfileWriteLine("\n[DEBUG] ", args);
}
}
}
void error(T...)(T args)
{
stderr.writeln(args);
if(writeLogFile){
// Write to log file
logfileWriteLine(args);
}
}
void errorAndNotify(T...)(T args)
{
notify(args);
error(args);
}
void notify(T...)(T args)
{
version(Notifications) {
if (doNotifications) {
string result;
foreach (index, arg; args) {
result ~= to!string(arg);
if (index != args.length - 1)
result ~= " ";
}
auto n = new Notification("OneDrive", result, "IGNORED");
try {
shared void logThisMessage(string message, string[] levels = ["info"]) {
// Generate the timestamp for this log entry
auto timeStamp = leftJustify(Clock.currTime().toString(), 28, '0');
synchronized(bufferLock) {
foreach (level; levels) {
// Normal application output
if (!debugLogging) {
if ((level == "info") || ((verboseLogging) && (level == "verbose")) || (level == "logFileOnly") || (level == "consoleOnly") || (level == "consoleOnlyNoNewLine")) {
// Add this message to the buffer, with this format
buffer ~= [timeStamp, level, format("%s", message)];
}
} else {
// Debug Logging (--verbose --verbose | -v -v | -vv) output
// Add this message, regardless of 'level' to the buffer, with this format
buffer ~= [timeStamp, level, format("DEBUG: %s", message)];
// If there are multiple 'levels' configured, ignore this and break as we are doing debug logging
break;
}
// Submit the message to the dbus / notification daemon for display within the GUI being used
// Will not send GUI notifications when running in debug mode
if ((!debugLogging) && (level == "notify")) {
version(Notifications) {
if (sendGUINotification) {
notify(message);
}
}
}
}
}
}
shared void notify(string message) {
// Use dnotify's functionality for GUI notifications, if GUI notifications are enabled
version(Notifications) {
auto n = new Notification("Log Notification", message, "IGNORED");
n.show();
// Sent message to notification daemon
if (verbose >= 2) {
writeln("[DEBUG] Sent notification to notification service. If notification is not displayed, check dbus or notification-daemon for errors");
}
} catch (Throwable e) {
vlog("Got exception from showing notification: ", e);
}
}
}
}
}
private void logfileWriteLine(T...)(T args)
{
static import std.exception;
// Write to log file
string logFileName = .logFilePath ~ .username ~ ".onedrive.log";
auto currentTime = Clock.currTime();
auto timeString = currentTime.toString();
File logFile;
// Resolve: std.exception.ErrnoException@std/stdio.d(423): Cannot open file `/var/log/onedrive/xxxxx.onedrive.log' in mode `a' (Permission denied)
try {
logFile = File(logFileName, "a");
}
catch (std.exception.ErrnoException e) {
// We cannot open the log file in logFilePath location for writing
// The user is not part of the standard 'users' group (GID 100)
// Change logfile to ~/onedrive.log, putting the log file in the user's home directory
private void flushBuffer() {
while (isRunning) {
Thread.sleep(dur!("msecs")(200));
flush();
}
}
if (!logFileWriteFailFlag) {
// write out an error message that we can't log to the requested file
writeln("\nUnable to write activity log to ", logFileName);
writeln("Please set appropriate permissions to allow write access to the logging directory for your user account");
writeln("The requested client activity log will instead be located in your users home directory\n");
// set the flag so we dont keep printing this error message
logFileWriteFailFlag = true;
}
string homePath = environment.get("HOME");
string logFileNameAlternate = homePath ~ "/onedrive.log";
logFile = File(logFileNameAlternate, "a");
}
// Write to the log file
logFile.writeln(timeString, "\t", args);
logFile.close();
private void flush() {
string[3][] messages;
synchronized(bufferLock) {
messages = buffer;
buffer.length = 0;
}
foreach (msg; messages) {
// timestamp, logLevel, message
// Always write the log line to the console, if level != logFileOnly
if (msg[1] != "logFileOnly") {
// Console output .. what sort of output
if (msg[1] == "consoleOnlyNoNewLine") {
// This is used to write out a message to the console only, without a new line
// This is used in non-verbose mode to indicate something is happening when downloading JSON data from OneDrive or when we need user input from --resync
write(msg[2]);
} else {
// write this to the console with a new line
writeln(msg[2]);
}
}
// Was this just console only output?
if ((msg[1] != "consoleOnlyNoNewLine") && (msg[1] != "consoleOnly")) {
// Write to the logfile only if configured to do so - console only items should not be written out
if (writeToFile) {
string logFileLine = format("[%s] %s", msg[0], msg[2]);
std.file.append(logFilePath, logFileLine ~ "\n");
}
}
}
}
}
private string getUserName()
{
auto pw = getpwuid(getuid);
// get required details
auto runtime_pw_name = pw.pw_name[0 .. strlen(pw.pw_name)].splitter(',');
auto runtime_pw_uid = pw.pw_uid;
auto runtime_pw_gid = pw.pw_gid;
// user identifiers from process
vdebug("Process ID: ", pw);
vdebug("User UID: ", runtime_pw_uid);
vdebug("User GID: ", runtime_pw_gid);
// What should be returned as username?
if (!runtime_pw_name.empty && runtime_pw_name.front.length){
// user resolved
vdebug("User Name: ", runtime_pw_name.front.idup);
return runtime_pw_name.front.idup;
} else {
// Unknown user?
vdebug("User Name: unknown");
return "unknown";
}
// Function to initialise the logging system
void initialiseLogging(bool verboseLogging = false, bool debugLogging = false) {
logBuffer = cast(shared) new LogBuffer(verboseLogging, debugLogging);
}
void displayMemoryUsagePreGC()
{
// Display memory usage
writeln("\nMemory Usage pre GC (bytes)");
writeln("--------------------");
writeln("memory usedSize = ", GC.stats.usedSize);
writeln("memory freeSize = ", GC.stats.freeSize);
// uncomment this if required, if not using LDC 1.16 as this does not exist in that version
//writeln("memory allocatedInCurrentThread = ", GC.stats.allocatedInCurrentThread, "\n");
// Function to add a log entry with multiple levels
void addLogEntry(string message = "", string[] levels = ["info"]) {
logBuffer.logThisMessage(message, levels);
}
void displayMemoryUsagePostGC()
{
// Display memory usage
writeln("\nMemory Usage post GC (bytes)");
writeln("--------------------");
writeln("memory usedSize = ", GC.stats.usedSize);
writeln("memory freeSize = ", GC.stats.freeSize);
// uncomment this if required, if not using LDC 1.16 as this does not exist in that version
//writeln("memory allocatedInCurrentThread = ", GC.stats.allocatedInCurrentThread, "\n");
// Function to set logFilePath and enable logging to a file
void enableLogFileOutput(string configuredLogFilePath) {
logBuffer.logFilePath = configuredLogFilePath;
logBuffer.writeToFile = true;
}
void disableGUINotifications(bool userConfigDisableNotifications) {
logBuffer.sendGUINotification = userConfigDisableNotifications;
}
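A brief usage sketch of this logging interface follows; the log file path is an assumption for illustration:

// Hypothetical usage of the logging functions above
initialiseLogging(true, false);                                // verbose on, debug off
enableLogFileOutput("/var/log/onedrive/example.onedrive.log"); // assumed path
addLogEntry("Sync started");                                   // console and log file
addLogEntry("Evaluating item", ["verbose"]);                   // only when verbose logging
addLogEntry("Raw API response", ["debug"]);                    // only when debug logging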

3057
src/main.d

File diff suppressed because it is too large

View file

@ -1,69 +1,205 @@
import core.sys.linux.sys.inotify;
import core.stdc.errno;
import core.sys.posix.poll, core.sys.posix.unistd;
import std.exception, std.file, std.path, std.regex, std.stdio, std.string, std.algorithm;
import core.stdc.stdlib;
import config;
import selective;
import util;
static import log;
// What is this module called?
module monitor;
// relevant inotify events
// What does this module require to function?
import core.stdc.errno;
import core.stdc.stdlib;
import core.sys.linux.sys.inotify;
import core.sys.posix.poll;
import core.sys.posix.unistd;
import core.sys.posix.sys.select;
import core.time;
import std.algorithm;
import std.concurrency;
import std.exception;
import std.file;
import std.path;
import std.regex;
import std.stdio;
import std.string;
import std.conv;
// What other modules that we have created do we need to import?
import config;
import util;
import log;
import clientSideFiltering;
// Relevant inotify events
private immutable uint32_t mask = IN_CLOSE_WRITE | IN_CREATE | IN_DELETE | IN_MOVE | IN_IGNORED | IN_Q_OVERFLOW;
class MonitorException: ErrnoException
{
@safe this(string msg, string file = __FILE__, size_t line = __LINE__)
{
class MonitorException: ErrnoException {
@safe this(string msg, string file = __FILE__, size_t line = __LINE__) {
super(msg, file, line);
}
}
final class Monitor
{
bool verbose;
shared class MonitorBackgroundWorker {
// inotify file descriptor
private int fd;
int fd;
private bool working;
void initialise() {
fd = inotify_init();
working = false;
if (fd < 0) throw new MonitorException("inotify_init failed");
}
// Add this path to be monitored
private int addInotifyWatch(string pathname) {
int wd = inotify_add_watch(fd, toStringz(pathname), mask);
if (wd < 0) {
if (errno() == ENOSPC) {
// Get the current value
ulong maxInotifyWatches = to!ulong(strip(readText("/proc/sys/fs/inotify/max_user_watches")));
addLogEntry("The user limit on the total number of inotify watches has been reached.");
addLogEntry("Your current limit of inotify watches is: " ~ to!string(maxInotifyWatches));
addLogEntry("It is recommended that you change the max number of inotify watches to at least double your existing value.");
addLogEntry("To change the current max number of watches to " ~ to!string((maxInotifyWatches * 2)) ~ " run:");
addLogEntry("EXAMPLE: sudo sysctl fs.inotify.max_user_watches=" ~ to!string((maxInotifyWatches * 2)));
}
if (errno() == 13) {
addLogEntry("WARNING: inotify_add_watch failed - permission denied: " ~ pathname, ["verbose"]);
}
// Flag any other errors
addLogEntry("ERROR: inotify_add_watch failed: " ~ pathname);
return wd;
}
// Add path to inotify watch - required regardless if a '.folder' or 'folder'
addLogEntry("inotify_add_watch successfully added for: " ~ pathname, ["debug"]);
// Do we log that we are monitoring this directory?
if (isDir(pathname)) {
// Log that this is directory is being monitored
addLogEntry("Monitoring directory: " ~ pathname, ["verbose"]);
}
return wd;
}
int remove(int wd) {
return inotify_rm_watch(fd, wd);
}
bool isWorking() {
return working;
}
void watch(Tid callerTid) {
// On failure, send -1 to caller
int res;
// wait for the caller to be ready
int isAlive = receiveOnly!int();
while (isAlive) {
fd_set fds;
FD_ZERO (&fds);
FD_SET(fd, &fds);
working = true;
res = select(FD_SETSIZE, &fds, null, null, null);
if(res == -1) {
if(errno() == EINTR) {
// Received an interrupt signal but no events are available
// try to update the work status and then watch again directly
receiveTimeout(dur!"seconds"(1), (int msg) {
isAlive = msg;
});
} else {
// Error occurred, tell caller to terminate.
callCaller(callerTid, -1);
working = false;
break;
}
} else {
// Wake up caller
callCaller(callerTid, 1);
// Wait for the caller to be ready
isAlive = receiveOnly!int();
}
}
}
void callCaller(Tid callerTid, int msg) {
working = false;
callerTid.send(msg);
}
void shutdown() {
if (fd > 0) {
close(fd);
fd = 0;
}
}
}
void startMonitorJob(shared(MonitorBackgroundWorker) worker, Tid callerTid)
{
try {
worker.watch(callerTid);
} catch (OwnerTerminated error) {
// caller is terminated
}
worker.shutdown();
}
final class Monitor {
// Class variables
ApplicationConfig appConfig;
ClientSideFiltering selectiveSync;
// Are we verbose in logging output
bool verbose = false;
// skip symbolic links
bool skip_symlinks = false;
// check for .nosync if enabled
bool check_nosync = false;
// check if initialised
bool initialised = false;
// Configure Private Class Variables
shared(MonitorBackgroundWorker) worker;
// map every inotify watch descriptor to its directory
private string[int] wdToDirName;
// map the inotify cookies of move_from events to their path
private string[int] cookieToPath;
// buffer to receive the inotify events
private void[] buffer;
// skip symbolic links
bool skip_symlinks;
// check for .nosync if enabled
bool check_nosync;
private SelectiveSync selectiveSync;
// Configure function delegates
void delegate(string path) onDirCreated;
void delegate(string path) onFileChanged;
void delegate(string path) onDelete;
void delegate(string from, string to) onMove;
this(SelectiveSync selectiveSync)
{
assert(selectiveSync);
// Configure the class variable to consume the application configuration, including selective sync
this(ApplicationConfig appConfig, ClientSideFiltering selectiveSync) {
this.appConfig = appConfig;
this.selectiveSync = selectiveSync;
}
void init(Config cfg, bool verbose, bool skip_symlinks, bool check_nosync)
{
this.verbose = verbose;
this.skip_symlinks = skip_symlinks;
this.check_nosync = check_nosync;
// Initialise the monitor class
void initialise() {
// Configure the variables
skip_symlinks = appConfig.getValueBool("skip_symlinks");
check_nosync = appConfig.getValueBool("check_nosync");
if (appConfig.getValueLong("verbose") > 0) {
verbose = true;
}
assert(onDirCreated && onFileChanged && onDelete && onMove);
fd = inotify_init();
if (fd < 0) throw new MonitorException("inotify_init failed");
if (!buffer) buffer = new void[4096];
worker = new shared(MonitorBackgroundWorker);
worker.initialise();
// from which point do we start watching for changes?
string monitorPath;
if (cfg.getValueString("single_directory") != ""){
// single directory in use, monitor only this
monitorPath = "./" ~ cfg.getValueString("single_directory");
if (appConfig.getValueString("single_directory") != ""){
// single directory in use, monitor only this path
monitorPath = "./" ~ appConfig.getValueString("single_directory");
} else {
// default
monitorPath = ".";
@ -71,17 +207,19 @@ final class Monitor
addRecursive(monitorPath);
}
void shutdown()
{
if (fd > 0) close(fd);
// Shutdown the monitor class
void shutdown() {
if(!initialised)
return;
worker.shutdown();
wdToDirName = null;
}
private void addRecursive(string dirname)
{
// Recursively add this path to be monitored
private void addRecursive(string dirname) {
// skip non existing/disappeared items
if (!exists(dirname)) {
log.vlog("Not adding non-existing/disappeared directory: ", dirname);
addLogEntry("Not adding non-existing/disappeared directory: " ~ dirname, ["verbose"]);
return;
}
@ -93,7 +231,7 @@ final class Monitor
if (isDir(dirname)) {
if (selectiveSync.isDirNameExcluded(dirname.strip('.'))) {
// don't add a watch for this item
log.vdebug("Skipping monitoring due to skip_dir match: ", dirname);
addLogEntry("Skipping monitoring due to skip_dir match: " ~ dirname, ["debug"]);
return;
}
}
@ -103,14 +241,14 @@ final class Monitor
// This is because, if the user has specified an exclusive path in skip_file such as '/path/file', that is what must be matched
if (selectiveSync.isFileNameExcluded(dirname.strip('.'))) {
// don't add a watch for this item
log.vdebug("Skipping monitoring due to skip_file match: ", dirname);
addLogEntry("Skipping monitoring due to skip_file match: " ~ dirname, ["debug"]);
return;
}
}
// is the path excluded by sync_list?
if (selectiveSync.isPathExcludedViaSyncList(buildNormalizedPath(dirname))) {
// don't add a watch for this item
log.vdebug("Skipping monitoring due to sync_list match: ", dirname);
addLogEntry("Skipping monitoring due to sync_list match: " ~ dirname, ["debug"]);
return;
}
}
@ -127,15 +265,27 @@ final class Monitor
// Do we need to check for .nosync? Only if check_nosync is true
if (check_nosync) {
if (exists(buildNormalizedPath(dirname) ~ "/.nosync")) {
log.vlog("Skipping watching path - .nosync found & --check-for-nosync enabled: ", buildNormalizedPath(dirname));
addLogEntry("Skipping watching path - .nosync found & --check-for-nosync enabled: " ~ buildNormalizedPath(dirname), ["verbose"]);
return;
}
}
if (isDir(dirname)) {
// This is a directory
// is the path excluded if skip_dotfiles is configured and the path is a .folder?
if ((selectiveSync.getSkipDotfiles()) && (isDotFile(dirname))) {
// don't add a watch for this directory
return;
}
}
// passed all potential exclusions
// add inotify watch for this path / directory / file
log.vdebug("Calling add() for this dirname: ", dirname);
add(dirname);
addLogEntry("Calling worker.addInotifyWatch() for this dirname: " ~ dirname, ["debug"]);
int wd = worker.addInotifyWatch(dirname);
if (wd > 0) {
wdToDirName[wd] = buildNormalizedPath(dirname) ~ "/";
}
// if this is a directory, recursively add this path
if (isDir(dirname)) {
@ -144,7 +294,7 @@ final class Monitor
auto pathList = dirEntries(dirname, SpanMode.shallow, false);
foreach(DirEntry entry; pathList) {
if (entry.isDir) {
log.vdebug("Calling addRecursive() for this directory: ", entry.name);
addLogEntry("Calling addRecursive() for this directory: " ~ entry.name, ["debug"]);
addRecursive(entry.name);
}
}
@ -158,10 +308,10 @@ final class Monitor
// Need to check for: Failed to stat file in error message
if (canFind(e.msg, "Failed to stat file")) {
// File system access issue
log.error("ERROR: The local file system returned an error with the following message:");
log.error(" Error Message: ", e.msg);
log.error("ACCESS ERROR: Please check your UID and GID access to this file, as the permissions on this file is preventing this application to read it");
log.error("\nFATAL: Exiting application to avoid deleting data due to local file system access issues\n");
addLogEntry("ERROR: The local file system returned an error with the following message:");
addLogEntry(" Error Message: " ~ e.msg);
addLogEntry("ACCESS ERROR: Please check your UID and GID access to this file, as the permissions on this file is preventing this application to read it");
addLogEntry("\nFATAL: Forcing exiting application to avoid deleting data due to local file system access issues\n");
// Must exit here
exit(-1);
} else {
@ -173,85 +323,47 @@ final class Monitor
}
}
private void add(string pathname)
{
int wd = inotify_add_watch(fd, toStringz(pathname), mask);
if (wd < 0) {
if (errno() == ENOSPC) {
log.log("The user limit on the total number of inotify watches has been reached.");
log.log("To see the current max number of watches run:");
log.log("sysctl fs.inotify.max_user_watches");
log.log("To change the current max number of watches to 524288 run:");
log.log("sudo sysctl fs.inotify.max_user_watches=524288");
}
if (errno() == 13) {
if ((selectiveSync.getSkipDotfiles()) && (selectiveSync.isDotFile(pathname))) {
// no misleading output that we could not add a watch due to permission denied
return;
} else {
log.vlog("WARNING: inotify_add_watch failed - permission denied: ", pathname);
return;
}
}
// Flag any other errors
log.error("ERROR: inotify_add_watch failed: ", pathname);
return;
}
// Add path to inotify watch - required regardless if a '.folder' or 'folder'
wdToDirName[wd] = buildNormalizedPath(pathname) ~ "/";
log.vdebug("inotify_add_watch successfully added for: ", pathname);
// Do we log that we are monitoring this directory?
if (isDir(pathname)) {
// This is a directory
// is the path exluded if skip_dotfiles configured and path is a .folder?
if ((selectiveSync.getSkipDotfiles()) && (selectiveSync.isDotFile(pathname))) {
// no misleading output that we are monitoring this directory
return;
}
// Log that this is directory is being monitored
log.vlog("Monitor directory: ", pathname);
}
}
// remove a watch descriptor
private void remove(int wd)
{
// Remove a watch descriptor
private void remove(int wd) {
assert(wd in wdToDirName);
int ret = inotify_rm_watch(fd, wd);
int ret = worker.remove(wd);
if (ret < 0) throw new MonitorException("inotify_rm_watch failed");
log.vlog("Monitored directory removed: ", wdToDirName[wd]);
addLogEntry("Monitored directory removed: " ~ to!string(wdToDirName[wd]), ["verbose"]);
wdToDirName.remove(wd);
}
// remove the watch descriptors associated to the given path
private void remove(const(char)[] path)
{
// Remove the watch descriptors associated to the given path
private void remove(const(char)[] path) {
path ~= "/";
foreach (wd, dirname; wdToDirName) {
if (dirname.startsWith(path)) {
int ret = inotify_rm_watch(fd, wd);
int ret = worker.remove(wd);
if (ret < 0) throw new MonitorException("inotify_rm_watch failed");
wdToDirName.remove(wd);
log.vlog("Monitored directory removed: ", dirname);
addLogEntry("Monitored directory removed: " ~ dirname, ["verbose"]);
}
}
}
// return the file path from an inotify event
private string getPath(const(inotify_event)* event)
{
// Return the file path from an inotify event
private string getPath(const(inotify_event)* event) {
string path = wdToDirName[event.wd];
if (event.len > 0) path ~= fromStringz(event.name.ptr);
log.vdebug("inotify path event for: ", path);
addLogEntry("inotify path event for: " ~ path, ["debug"]);
return path;
}
void update(bool useCallbacks = true)
{
shared(MonitorBackgroundWorker) getWorker() {
return worker;
}
// Update
void update(bool useCallbacks = true) {
if(!initialised)
return;
pollfd fds = {
fd: fd,
fd: worker.fd,
events: POLLIN
};
@ -260,7 +372,7 @@ final class Monitor
if (ret == -1) throw new MonitorException("poll failed");
else if (ret == 0) break; // no events available
size_t length = read(fd, buffer.ptr, buffer.length);
size_t length = read(worker.fd, buffer.ptr, buffer.length);
if (length == -1) throw new MonitorException("read failed");
int i = 0;
@ -268,35 +380,38 @@ final class Monitor
inotify_event *event = cast(inotify_event*) &buffer[i];
string path;
string evalPath;
// inotify event debug
log.vdebug("inotify event wd: ", event.wd);
log.vdebug("inotify event mask: ", event.mask);
log.vdebug("inotify event cookie: ", event.cookie);
log.vdebug("inotify event len: ", event.len);
log.vdebug("inotify event name: ", event.name);
if (event.mask & IN_ACCESS) log.vdebug("inotify event flag: IN_ACCESS");
if (event.mask & IN_MODIFY) log.vdebug("inotify event flag: IN_MODIFY");
if (event.mask & IN_ATTRIB) log.vdebug("inotify event flag: IN_ATTRIB");
if (event.mask & IN_CLOSE_WRITE) log.vdebug("inotify event flag: IN_CLOSE_WRITE");
if (event.mask & IN_CLOSE_NOWRITE) log.vdebug("inotify event flag: IN_CLOSE_NOWRITE");
if (event.mask & IN_MOVED_FROM) log.vdebug("inotify event flag: IN_MOVED_FROM");
if (event.mask & IN_MOVED_TO) log.vdebug("inotify event flag: IN_MOVED_TO");
if (event.mask & IN_CREATE) log.vdebug("inotify event flag: IN_CREATE");
if (event.mask & IN_DELETE) log.vdebug("inotify event flag: IN_DELETE");
if (event.mask & IN_DELETE_SELF) log.vdebug("inotify event flag: IN_DELETE_SELF");
if (event.mask & IN_MOVE_SELF) log.vdebug("inotify event flag: IN_MOVE_SELF");
if (event.mask & IN_UNMOUNT) log.vdebug("inotify event flag: IN_UNMOUNT");
if (event.mask & IN_Q_OVERFLOW) log.vdebug("inotify event flag: IN_Q_OVERFLOW");
if (event.mask & IN_IGNORED) log.vdebug("inotify event flag: IN_IGNORED");
if (event.mask & IN_CLOSE) log.vdebug("inotify event flag: IN_CLOSE");
if (event.mask & IN_MOVE) log.vdebug("inotify event flag: IN_MOVE");
if (event.mask & IN_ONLYDIR) log.vdebug("inotify event flag: IN_ONLYDIR");
if (event.mask & IN_DONT_FOLLOW) log.vdebug("inotify event flag: IN_DONT_FOLLOW");
if (event.mask & IN_EXCL_UNLINK) log.vdebug("inotify event flag: IN_EXCL_UNLINK");
if (event.mask & IN_MASK_ADD) log.vdebug("inotify event flag: IN_MASK_ADD");
if (event.mask & IN_ISDIR) log.vdebug("inotify event flag: IN_ISDIR");
if (event.mask & IN_ONESHOT) log.vdebug("inotify event flag: IN_ONESHOT");
if (event.mask & IN_ALL_EVENTS) log.vdebug("inotify event flag: IN_ALL_EVENTS");
addLogEntry("inotify event wd: " ~ to!string(event.wd), ["debug"]);
addLogEntry("inotify event mask: " ~ to!string(event.mask), ["debug"]);
addLogEntry("inotify event cookie: " ~ to!string(event.cookie), ["debug"]);
addLogEntry("inotify event len: " ~ to!string(event.len), ["debug"]);
addLogEntry("inotify event name: " ~ to!string(event.name), ["debug"]);
// inotify event handling
if (event.mask & IN_ACCESS) addLogEntry("inotify event flag: IN_ACCESS", ["debug"]);
if (event.mask & IN_MODIFY) addLogEntry("inotify event flag: IN_MODIFY", ["debug"]);
if (event.mask & IN_ATTRIB) addLogEntry("inotify event flag: IN_ATTRIB", ["debug"]);
if (event.mask & IN_CLOSE_WRITE) addLogEntry("inotify event flag: IN_CLOSE_WRITE", ["debug"]);
if (event.mask & IN_CLOSE_NOWRITE) addLogEntry("inotify event flag: IN_CLOSE_NOWRITE", ["debug"]);
if (event.mask & IN_MOVED_FROM) addLogEntry("inotify event flag: IN_MOVED_FROM", ["debug"]);
if (event.mask & IN_MOVED_TO) addLogEntry("inotify event flag: IN_MOVED_TO", ["debug"]);
if (event.mask & IN_CREATE) addLogEntry("inotify event flag: IN_CREATE", ["debug"]);
if (event.mask & IN_DELETE) addLogEntry("inotify event flag: IN_DELETE", ["debug"]);
if (event.mask & IN_DELETE_SELF) addLogEntry("inotify event flag: IN_DELETE_SELF", ["debug"]);
if (event.mask & IN_MOVE_SELF) addLogEntry("inotify event flag: IN_MOVE_SELF", ["debug"]);
if (event.mask & IN_UNMOUNT) addLogEntry("inotify event flag: IN_UNMOUNT", ["debug"]);
if (event.mask & IN_Q_OVERFLOW) addLogEntry("inotify event flag: IN_Q_OVERFLOW", ["debug"]);
if (event.mask & IN_IGNORED) addLogEntry("inotify event flag: IN_IGNORED", ["debug"]);
if (event.mask & IN_CLOSE) addLogEntry("inotify event flag: IN_CLOSE", ["debug"]);
if (event.mask & IN_MOVE) addLogEntry("inotify event flag: IN_MOVE", ["debug"]);
if (event.mask & IN_ONLYDIR) addLogEntry("inotify event flag: IN_ONLYDIR", ["debug"]);
if (event.mask & IN_DONT_FOLLOW) addLogEntry("inotify event flag: IN_DONT_FOLLOW", ["debug"]);
if (event.mask & IN_EXCL_UNLINK) addLogEntry("inotify event flag: IN_EXCL_UNLINK", ["debug"]);
if (event.mask & IN_MASK_ADD) addLogEntry("inotify event flag: IN_MASK_ADD", ["debug"]);
if (event.mask & IN_ISDIR) addLogEntry("inotify event flag: IN_ISDIR", ["debug"]);
if (event.mask & IN_ONESHOT) addLogEntry("inotify event flag: IN_ONESHOT", ["debug"]);
if (event.mask & IN_ALL_EVENTS) addLogEntry("inotify event flag: IN_ALL_EVENTS", ["debug"]);
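// Note: IN_CLOSE, IN_MOVE and IN_ALL_EVENTS are composite masks per inotify(7)
// (IN_CLOSE = IN_CLOSE_WRITE | IN_CLOSE_NOWRITE, IN_MOVE = IN_MOVED_FROM | IN_MOVED_TO),
// so they log in addition to their member flags whenever a member bit is set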
// skip events that need to be ignored
if (event.mask & IN_IGNORED) {
@@ -304,7 +419,7 @@ final class Monitor
wdToDirName.remove(event.wd);
goto skip;
} else if (event.mask & IN_Q_OVERFLOW) {
throw new MonitorException("Inotify overflow, events missing");
throw new MonitorException("inotify overflow, inotify events will be missing");
}
// if the event is not to be ignored, obtain path
@@ -342,10 +457,10 @@ final class Monitor
// handle the inotify events
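// Note: a rename is delivered as an IN_MOVED_FROM / IN_MOVED_TO pair that shares
// the same event.cookie; cookieToPath pairs the two halves across events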
if (event.mask & IN_MOVED_FROM) {
log.vdebug("event IN_MOVED_FROM: ", path);
addLogEntry("event IN_MOVED_FROM: " ~ path, ["debug"]);
cookieToPath[event.cookie] = path;
} else if (event.mask & IN_MOVED_TO) {
log.vdebug("event IN_MOVED_TO: ", path);
addLogEntry("event IN_MOVED_TO: " ~ path, ["debug"]);
if (event.mask & IN_ISDIR) addRecursive(path);
auto from = event.cookie in cookieToPath;
if (from) {
@@ -360,32 +475,43 @@ final class Monitor
}
}
} else if (event.mask & IN_CREATE) {
log.vdebug("event IN_CREATE: ", path);
addLogEntry("event IN_CREATE: " ~ path, ["debug"]);
if (event.mask & IN_ISDIR) {
addRecursive(path);
if (useCallbacks) onDirCreated(path);
}
} else if (event.mask & IN_DELETE) {
log.vdebug("event IN_DELETE: ", path);
addLogEntry("event IN_DELETE: " ~ path, ["debug"]);
if (useCallbacks) onDelete(path);
} else if ((event.mask & IN_CLOSE_WRITE) && !(event.mask & IN_ISDIR)) {
log.vdebug("event IN_CLOSE_WRITE and ...: ", path);
addLogEntry("event IN_CLOSE_WRITE and not IN_ISDIR: " ~ path, ["debug"]);
if (useCallbacks) onFileChanged(path);
} else {
log.vdebug("event unhandled: ", path);
addLogEntry("event unhandled: " ~ path, ["debug"]);
assert(0);
}
skip:
i += inotify_event.sizeof + event.len;
}
// assume that the items moved outside the watched directory have been deleted
// Assume that the items moved outside the watched directory have been deleted
foreach (cookie, path; cookieToPath) {
log.vdebug("deleting (post loop): ", path);
addLogEntry("Deleting cookie|watch (post loop): " ~ path, ["debug"]);
if (useCallbacks) onDelete(path);
remove(path);
cookieToPath.remove(cookie);
}
// Debug Log that all inotify events are flushed
addLogEntry("inotify events flushed", ["debug"]);
}
}
Tid watch() {
initialised = true;
return spawn(&startMonitorJob, worker, thisTid);
}
bool isWorking() {
return worker.isWorking();
}
}
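A minimal sketch of the buffer walk that drives the event loop above; drainEvents and the buffer size are illustrative, not symbols from this client, and only the pointer arithmetic mirrors the code:

// Sketch: draining a read() buffer of variable-length inotify events
import core.sys.linux.sys.inotify;
import core.sys.posix.unistd : read;

void drainEvents(int fd) {
	ubyte[4096] buffer;
	auto length = read(fd, buffer.ptr, buffer.length);
	if (length <= 0) return;
	size_t i = 0;
	while (i < cast(size_t) length) {
		// each record is a fixed-size inotify_event header plus event.len name bytes
		auto event = cast(inotify_event*) &buffer[i];
		if (event.mask & IN_Q_OVERFLOW) break; // queue overflowed; events were lost
		i += inotify_event.sizeof + event.len; // advance past header + name
	}
}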

File diff suppressed because it is too large

View file

@@ -1,156 +0,0 @@
module progress;
import std.stdio;
import std.range;
import std.format;
import std.datetime;
import core.sys.posix.unistd;
import core.sys.posix.sys.ioctl;
class Progress
{
private:
immutable static size_t default_width = 80;
size_t max_width = 40;
size_t width = default_width;
ulong start_time;
string caption = "Progress";
size_t iterations;
size_t counter;
size_t getTerminalWidth() {
size_t column = default_width;
version (CRuntime_Musl) {
} else version(Android) {
} else {
winsize ws;
if(ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) != -1 && ws.ws_col > 0) {
column = ws.ws_col;
}
}
return column;
}
void clear() {
write("\r");
for(auto i = 0; i < width; i++) write(" ");
write("\r");
}
int calc_eta() {
immutable auto ratio = cast(double)counter / iterations;
auto current_time = Clock.currTime.toUnixTime();
auto duration = cast(int)(current_time - start_time);
int hours, minutes, seconds;
double elapsed = (current_time - start_time);
int eta_sec = cast(int)((elapsed / ratio) - elapsed);
// Return an ETA or Duration?
if (eta_sec != 0){
return eta_sec;
} else {
return duration;
}
}
string progressbarText(string header_text, string footer_text) {
immutable auto ratio = cast(double)counter / iterations;
string result = "";
double bar_length = width - header_text.length - footer_text.length;
if(bar_length > max_width && max_width > 0) {
bar_length = max_width;
}
size_t i = 0;
for(; i < ratio * bar_length; i++) result ~= "o";
for(; i < bar_length; i++) result ~= " ";
return header_text ~ result ~ footer_text;
}
void print() {
immutable auto ratio = cast(double)counter / iterations;
auto header = appender!string();
auto footer = appender!string();
header.formattedWrite("%s %3d%% |", caption, cast(int)(ratio * 100));
if(counter <= 0 || ratio == 0.0) {
footer.formattedWrite("| ETA --:--:--:");
} else {
int h, m, s;
dur!"seconds"(calc_eta())
.split!("hours", "minutes", "seconds")(h, m, s);
if (counter != iterations){
footer.formattedWrite("| ETA %02d:%02d:%02d ", h, m, s);
} else {
footer.formattedWrite("| DONE IN %02d:%02d:%02d ", h, m, s);
}
}
write(progressbarText(header.data, footer.data));
}
void update() {
width = getTerminalWidth();
clear();
print();
stdout.flush();
}
public:
this(size_t iterations) {
if(iterations <= 0) iterations = 1;
counter = -1;
this.iterations = iterations;
start_time = Clock.currTime.toUnixTime;
}
@property {
string title() { return caption; }
string title(string text) { return caption = text; }
}
@property {
size_t count() { return counter; }
size_t count(size_t val) {
if(val > iterations) val = iterations;
return counter = val;
}
}
@property {
size_t maxWidth() { return max_width; }
size_t maxWidth(size_t w) {
return max_width = w;
}
}
void reset() {
counter = -1;
start_time = Clock.currTime.toUnixTime;
}
void next() {
counter++;
if(counter > iterations) counter = iterations;
update();
}
}
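A usage sketch for the Progress class removed above; progressDemo and its values are illustrative:

// Drive the bar by calling next() once per step: the first call renders 0%,
// and reaching the iteration count switches the footer from ETA to DONE
void progressDemo() {
	auto p = new Progress(3);
	p.title = "Uploading";
	foreach (i; 0 .. 4) p.next(); // 0%, then one step per completed iteration
}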

View file

@@ -1,7 +1,11 @@
// What is this module called?
module qxor;
// What does this module require to function?
import std.algorithm;
import std.digest;
// implementation of the QuickXorHash algorithm in D
// Implementation of the QuickXorHash algorithm in D
// https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/code-snippets/quickxorhash.md
struct QuickXor
{
@@ -71,18 +75,4 @@ struct QuickXor
}
return tmp;
}
}
unittest
{
assert(isDigest!QuickXor);
}
unittest
{
QuickXor qxor;
qxor.put(cast(ubyte[]) "The quick brown fox jumps over the lazy dog");
assert(qxor.finish().toHexString() == "6CC4A56F2B26C492FA4BBE57C1F31C4193A972BE");
}
alias QuickXorDigest = WrapperDigest!(QuickXor);
}
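A usage sketch via the generic std.digest helpers, whose conformance the removed unit tests above asserted; the expected hex digest is taken from those tests:

void quickXorDemo() {
	import std.digest;
	// digest! drives put()/finish() through the generic std.digest API
	auto raw = digest!QuickXor("The quick brown fox jumps over the lazy dog");
	assert(raw.toHexString() == "6CC4A56F2B26C492FA4BBE57C1F31C4193A972BE");
}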

View file

@@ -1,422 +0,0 @@
import std.algorithm;
import std.array;
import std.file;
import std.path;
import std.regex;
import std.stdio;
import std.string;
import util;
import log;
final class SelectiveSync
{
private string[] paths;
private string[] businessSharedFoldersList;
private Regex!char mask;
private Regex!char dirmask;
private bool skipDirStrictMatch = false;
private bool skipDotfiles = false;
// load sync_list file
void load(string filepath)
{
if (exists(filepath)) {
// open file as read only
auto file = File(filepath, "r");
auto range = file.byLine();
foreach (line; range) {
// Skip comments in file
if (line.length == 0 || line[0] == ';' || line[0] == '#') continue;
paths ~= buildNormalizedPath(line);
}
file.close();
}
}
// Configure skipDirStrictMatch if function is called
// By default, skipDirStrictMatch = false;
void setSkipDirStrictMatch()
{
skipDirStrictMatch = true;
}
// load business_shared_folders file
void loadSharedFolders(string filepath)
{
if (exists(filepath)) {
// open file as read only
auto file = File(filepath, "r");
auto range = file.byLine();
foreach (line; range) {
// Skip comments in file
if (line.length == 0 || line[0] == ';' || line[0] == '#') continue;
businessSharedFoldersList ~= buildNormalizedPath(line);
}
file.close();
}
}
void setFileMask(const(char)[] mask)
{
this.mask = wild2regex(mask);
}
void setDirMask(const(char)[] dirmask)
{
this.dirmask = wild2regex(dirmask);
}
// Configure skipDotfiles if function is called
// By default, skipDotfiles = false;
void setSkipDotfiles()
{
skipDotfiles = true;
}
// return value of skipDotfiles
bool getSkipDotfiles()
{
return skipDotfiles;
}
// config file skip_dir parameter
bool isDirNameExcluded(string name)
{
// Does the directory name match skip_dir config entry?
// Returns true if the name matches a skip_dir config entry
// Returns false if no match
log.vdebug("skip_dir evaluation for: ", name);
// Try full path match first
if (!name.matchFirst(dirmask).empty) {
log.vdebug("'!name.matchFirst(dirmask).empty' returned true = matched");
return true;
} else {
// Do we check the base name as well?
if (!skipDirStrictMatch) {
log.vdebug("No Strict Matching Enforced");
// Test the entire path working backwards from child
string path = buildNormalizedPath(name);
string checkPath;
auto paths = pathSplitter(path);
foreach_reverse(directory; paths) {
if (directory != "/") {
// This will add a leading '/' but that needs to be stripped to check
checkPath = "/" ~ directory ~ checkPath;
if(!checkPath.strip('/').matchFirst(dirmask).empty) {
log.vdebug("'!checkPath.matchFirst(dirmask).empty' returned true = matched");
return true;
}
}
}
} else {
log.vdebug("Strict Matching Enforced - No Match");
}
}
// no match
return false;
}
// config file skip_file parameter
bool isFileNameExcluded(string name)
{
// Does the file name match skip_file config entry?
// Returns true if the name matches a skip_file config entry
// Returns false if no match
log.vdebug("skip_file evaluation for: ", name);
// Try full path match first
if (!name.matchFirst(mask).empty) {
return true;
} else {
// check just the file name
string filename = baseName(name);
if(!filename.matchFirst(mask).empty) {
return true;
}
}
// no match
return false;
}
// Match against sync_list only
bool isPathExcludedViaSyncList(string path)
{
// Debug output that we are performing a 'sync_list' inclusion / exclusion test
return .isPathExcluded(path, paths);
}
// Match against skip_dir, skip_file & sync_list entries
bool isPathExcludedMatchAll(string path)
{
return .isPathExcluded(path, paths) || .isPathMatched(path, mask) || .isPathMatched(path, dirmask);
}
// is the path a dotfile?
bool isDotFile(string path)
{
// always allow the root
if (path == ".") return false;
path = buildNormalizedPath(path);
auto paths = pathSplitter(path);
foreach(base; paths) {
if (startsWith(base, ".")){
return true;
}
}
return false;
}
// is business shared folder matched
bool isSharedFolderMatched(string name)
{
// if there are no shared folders, always return false
if (businessSharedFoldersList.empty) return false;
if (!name.matchFirst(businessSharedFoldersList).empty) {
return true;
} else {
// try a direct comparison just in case
foreach (userFolder; businessSharedFoldersList) {
if (userFolder == name) {
// direct match
log.vdebug("'matchFirst' failed to match, however direct comparison was matched: ", name);
return true;
}
}
return false;
}
}
// is business shared folder included
bool isPathIncluded(string path, string[] allowedPaths)
{
// always allow the root
if (path == ".") return true;
// if there are no allowed paths always return true
if (allowedPaths.empty) return true;
path = buildNormalizedPath(path);
foreach (allowed; allowedPaths) {
auto comm = commonPrefix(path, allowed);
if (comm.length == path.length) {
// the given path is contained in an allowed path
return true;
}
if (comm.length == allowed.length && path[comm.length] == '/') {
// the given path is a subitem of an allowed path
return true;
}
}
return false;
}
}
// test if the given path is not included in the allowed paths
// if there are no allowed paths always return false
private bool isPathExcluded(string path, string[] allowedPaths)
{
// function variables
bool exclude = false;
bool exludeDirectMatch = false; // will get updated to true, if there is a pattern match to sync_list entry
bool excludeMatched = false; // will get updated to true, if there is a pattern match to sync_list entry
bool finalResult = true; // will get updated to false, if pattern match to sync_list entry
int offset;
string wildcard = "*";
// always allow the root
if (path == ".") return false;
// if there are no allowed paths always return false
if (allowedPaths.empty) return false;
path = buildNormalizedPath(path);
log.vdebug("Evaluation against 'sync_list' for this path: ", path);
log.vdebug("[S]exclude = ", exclude);
log.vdebug("[S]exludeDirectMatch = ", exludeDirectMatch);
log.vdebug("[S]excludeMatched = ", excludeMatched);
// unless the path is an exact match, all sync_list entries need to be processed to ensure
// negative matches are also correctly detected
foreach (allowedPath; allowedPaths) {
// is this an inclusion path or finer grained exclusion?
switch (allowedPath[0]) {
case '-':
// sync_list path starts with '-', this user wants to exclude this path
exclude = true;
// If the sync_list entry starts with '-/' offset needs to be 2, else 1
if (startsWith(allowedPath, "-/")){
// Offset needs to be 2
offset = 2;
} else {
// Offset needs to be 1
offset = 1;
}
break;
case '!':
// sync_list path starts with '!', this user wants to exclude this path
exclude = true;
// If the sync_list entry starts with '!/' offset needs to be 2, else 1
if (startsWith(allowedPath, "!/")){
// Offset needs to be 2
offset = 2;
} else {
// Offset needs to be 1
offset = 1;
}
break;
case '/':
// sync_list path starts with '/', this user wants to include this path
// but a '/' at the start causes matching issues, so use the offset for comparison
exclude = false;
offset = 1;
break;
default:
// no negative pattern, default is to not exclude
exclude = false;
offset = 0;
}
// What are we comparing against?
log.vdebug("Evaluation against 'sync_list' entry: ", allowedPath);
// Generate the common prefix from the path vs the allowed path
auto comm = commonPrefix(path, allowedPath[offset..$]);
// Is path is an exact match of the allowed path?
if (comm.length == path.length) {
// we have a potential exact match
// strip any potential '/*' from the allowed path, to avoid a potential lesser common match
string strippedAllowedPath = strip(allowedPath[offset..$], "/*");
if (path == strippedAllowedPath) {
// we have an exact path match
log.vdebug("exact path match");
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: direct match");
finalResult = false;
// direct match, break and go sync
break;
} else {
log.vdebug("Evaluation against 'sync_list' result: direct match - path to be excluded");
// do not set excludeMatched = true here, otherwise parental path also gets excluded
// flag exludeDirectMatch so that a 'wildcard match' will not override this exclude
exludeDirectMatch = true;
// final result
finalResult = true;
}
} else {
// no exact path match, but something common does match
log.vdebug("something 'common' matches the input path");
auto splitAllowedPaths = pathSplitter(strippedAllowedPath);
string pathToEvaluate = "";
foreach(base; splitAllowedPaths) {
pathToEvaluate ~= base;
if (path == pathToEvaluate) {
// The input path matches what we want to evaluate against as a direct match
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: direct match for parental path item");
finalResult = false;
// direct match, break and go sync
break;
} else {
log.vdebug("Evaluation against 'sync_list' result: direct match for parental path item but to be excluded");
finalResult = true;
// do not set excludeMatched = true here, otherwise parental path also gets excluded
}
}
pathToEvaluate ~= dirSeparator;
}
}
}
// Is path is a subitem/sub-folder of the allowed path?
if (comm.length == allowedPath[offset..$].length) {
// The given path is potentially a subitem of an allowed path
// We want to capture sub-folders / files of allowed paths here, but not explicitly match other items
// if there is no wildcard
auto subItemPathCheck = allowedPath[offset..$] ~ "/";
if (canFind(path, subItemPathCheck)) {
// The 'path' includes the allowed path, and is 'most likely' a sub-path item
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: parental path match");
finalResult = false;
// parental path matches, break and go sync
break;
} else {
log.vdebug("Evaluation against 'sync_list' result: parental path match but must be excluded");
finalResult = true;
excludeMatched = true;
}
}
}
// Does the allowed path contain a wildcard? (*)
if (canFind(allowedPath[offset..$], wildcard)) {
// allowed path contains a wildcard
// manually replace '*' for '.*' to be compatible with regex
string regexCompatiblePath = replace(allowedPath[offset..$], "*", ".*");
auto allowedMask = regex(regexCompatiblePath);
if (matchAll(path, allowedMask)) {
// regex wildcard evaluation matches
// if we have a prior pattern match for an exclude, excludeMatched = true
if (!exclude && !excludeMatched && !exludeDirectMatch) {
// nothing triggered an exclusion before evaluation against wildcard match attempt
log.vdebug("Evaluation against 'sync_list' result: wildcard pattern match");
finalResult = false;
} else {
log.vdebug("Evaluation against 'sync_list' result: wildcard pattern matched but must be excluded");
finalResult = true;
excludeMatched = true;
}
}
}
}
// Interim results
log.vdebug("[F]exclude = ", exclude);
log.vdebug("[F]exludeDirectMatch = ", exludeDirectMatch);
log.vdebug("[F]excludeMatched = ", excludeMatched);
// If exclude or excludeMatched is true, then finalResult has to be true
if ((exclude) || (excludeMatched) || (exludeDirectMatch)) {
finalResult = true;
}
// results
if (finalResult) {
log.vdebug("Evaluation against 'sync_list' final result: EXCLUDED");
} else {
log.vdebug("Evaluation against 'sync_list' final result: included for sync");
}
return finalResult;
}
// test if the given path is matched by the regex expression.
// recursively test up the tree.
private bool isPathMatched(string path, Regex!char mask) {
path = buildNormalizedPath(path);
auto paths = pathSplitter(path);
string prefix = "";
foreach(base; paths) {
prefix ~= base;
if (!path.matchFirst(mask).empty) {
// the given path matches something which we should skip
return true;
}
prefix ~= dirSeparator;
}
return false;
}
// unit tests
unittest
{
assert(isPathExcluded("Documents2", ["Documents"]));
assert(!isPathExcluded("Documents", ["Documents"]));
assert(!isPathExcluded("Documents/a.txt", ["Documents"]));
assert(isPathExcluded("Hello/World", ["Hello/John"]));
assert(!isPathExcluded(".", ["Documents"]));
}
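A hypothetical sync_list illustrating the grammar the code above parses (';' or '#' begin comments; '-' or '!' exclude; a leading '/' anchors the entry; '*' is a wildcard):

; keep Documents in sync, except its Private subfolder
/Documents
-/Documents/Private
# wildcard entries become regexes ('*' -> '.*')
Photos/Camera*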

View file

@@ -1,27 +1,29 @@
// What is this module called?
module sqlite;
// What does this module require to function?
import std.stdio;
import etc.c.sqlite3;
import std.string: fromStringz, toStringz;
import core.stdc.stdlib;
import std.conv;
static import log;
// What other modules that we have created do we need to import?
import log;
extern (C) immutable(char)* sqlite3_errstr(int); // missing from the std library
static this()
{
static this() {
if (sqlite3_libversion_number() < 3006019) {
throw new SqliteException("sqlite 3.6.19 or newer is required");
}
}
private string ifromStringz(const(char)* cstr)
{
private string ifromStringz(const(char)* cstr) {
return fromStringz(cstr).dup;
}
class SqliteException: Exception
{
class SqliteException: Exception {
@safe pure nothrow this(string msg, string file = __FILE__, size_t line = __LINE__, Throwable next = null)
{
super(msg, file, line, next);
@@ -33,68 +35,67 @@ class SqliteException
}
}
struct Database
{
struct Database {
private sqlite3* pDb;
this(const(char)[] filename)
{
this(const(char)[] filename) {
open(filename);
}
~this()
{
~this() {
close();
}
int db_checkpoint()
{
int db_checkpoint() {
return sqlite3_wal_checkpoint(pDb, null);
}
void dump_open_statements()
{
log.log("Dumpint open statements: \n");
void dump_open_statements() {
addLogEntry("Dumping open statements:", ["debug"]);
auto p = sqlite3_next_stmt(pDb, null);
while (p != null) {
log.log (" - " ~ ifromStringz(sqlite3_sql(p)) ~ "\n");
addLogEntry(" - " ~ to!string(ifromStringz(sqlite3_sql(p))));
p = sqlite3_next_stmt(pDb, p);
}
}
void open(const(char)[] filename)
{
void open(const(char)[] filename) {
// https://www.sqlite.org/c3ref/open.html
int rc = sqlite3_open(toStringz(filename), &pDb);
if (rc == SQLITE_CANTOPEN) {
// Database cannot be opened
log.error("\nThe database cannot be opened. Please check the permissions of ~/.config/onedrive/items.sqlite3\n");
addLogEntry();
addLogEntry("The database cannot be opened. Please check the permissions of " ~ to!string(filename));
addLogEntry();
close();
exit(-1);
}
if (rc != SQLITE_OK) {
log.error("\nA database access error occurred: " ~ getErrorMessage() ~ "\n");
addLogEntry();
addLogEntry("A database access error occurred: " ~ getErrorMessage());
addLogEntry();
close();
exit(-1);
}
sqlite3_extended_result_codes(pDb, 1); // always use extended result codes
}
void exec(const(char)[] sql)
{
void exec(const(char)[] sql) {
// https://www.sqlite.org/c3ref/exec.html
int rc = sqlite3_exec(pDb, toStringz(sql), null, null, null);
if (rc != SQLITE_OK) {
log.error("\nA database execution error occurred: "~ getErrorMessage() ~ "\n");
log.error("Please retry your command with --resync to fix any local database corruption issues.\n");
addLogEntry();
addLogEntry("A database execution error occurred: "~ getErrorMessage());
addLogEntry();
addLogEntry("Please retry your command with --resync to fix any local database corruption issues.");
addLogEntry();
close();
exit(-1);
}
}
int getVersion()
{
int getVersion() {
int userVersion;
extern (C) int callback(void* user_version, int count, char** column_text, char** column_name) {
import core.stdc.stdlib: atoi;
@@ -107,20 +108,23 @@ struct Database
}
return userVersion;
}
int getThreadsafeValue() {
// Get the threadsafe value
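// sqlite3_threadsafe() reports the compile-time SQLITE_THREADSAFE setting:
// 0 = single-thread, 1 = serialized (the default build), 2 = multi-thread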
auto threadsafeValue = sqlite3_threadsafe();
return threadsafeValue;
}
string getErrorMessage()
{
string getErrorMessage() {
return ifromStringz(sqlite3_errmsg(pDb));
}
void setVersion(int userVersion)
{
void setVersion(int userVersion) {
import std.conv: to;
exec("PRAGMA user_version=" ~ to!string(userVersion));
}
Statement prepare(const(char)[] zSql)
{
Statement prepare(const(char)[] zSql) {
Statement s;
// https://www.sqlite.org/c3ref/prepare.html
int rc = sqlite3_prepare_v2(pDb, zSql.ptr, cast(int) zSql.length, &s.pStmt, null);
@@ -130,46 +134,39 @@ struct Database
return s;
}
void close()
{
void close() {
// https://www.sqlite.org/c3ref/close.html
sqlite3_close_v2(pDb);
pDb = null;
}
}
struct Statement
{
struct Result
{
struct Statement {
struct Result {
private sqlite3_stmt* pStmt;
private const(char)[][] row;
private this(sqlite3_stmt* pStmt)
{
private this(sqlite3_stmt* pStmt) {
this.pStmt = pStmt;
step(); // initialize the range
}
@property bool empty()
{
@property bool empty() {
return row.length == 0;
}
@property auto front()
{
@property auto front() {
return row;
}
alias step popFront;
void step()
{
void step() {
// https://www.sqlite.org/c3ref/step.html
int rc = sqlite3_step(pStmt);
if (rc == SQLITE_BUSY) {
// Database is locked by another onedrive process
log.error("The database is currently locked by another process - cannot sync");
addLogEntry("The database is currently locked by another process - cannot sync");
return;
}
if (rc == SQLITE_DONE) {
@@ -185,8 +182,11 @@ struct Statement
}
} else {
string errorMessage = ifromStringz(sqlite3_errmsg(sqlite3_db_handle(pStmt)));
log.error("\nA database statement execution error occurred: "~ errorMessage ~ "\n");
log.error("Please retry your command with --resync to fix any local database corruption issues.\n");
addLogEntry();
addLogEntry("A database statement execution error occurred: "~ errorMessage);
addLogEntry();
addLogEntry("Please retry your command with --resync to fix any local database corruption issues.");
addLogEntry();
exit(-1);
}
}
@@ -194,14 +194,12 @@ struct Statement
private sqlite3_stmt* pStmt;
~this()
{
~this() {
// https://www.sqlite.org/c3ref/finalize.html
sqlite3_finalize(pStmt);
}
void bind(int index, const(char)[] value)
{
void bind(int index, const(char)[] value) {
reset();
// https://www.sqlite.org/c3ref/bind_blob.html
int rc = sqlite3_bind_text(pStmt, index, value.ptr, cast(int) value.length, SQLITE_STATIC);
@@ -210,47 +208,16 @@ struct Statement
}
}
Result exec()
{
Result exec() {
reset();
return Result(pStmt);
}
private void reset()
{
private void reset() {
// https://www.sqlite.org/c3ref/reset.html
int rc = sqlite3_reset(pStmt);
if (rc != SQLITE_OK) {
throw new SqliteException(ifromStringz(sqlite3_errmsg(sqlite3_db_handle(pStmt))));
}
}
}
unittest
{
auto db = Database(":memory:");
db.exec("CREATE TABLE test(
id TEXT PRIMARY KEY,
value TEXT
)");
assert(db.getVersion() == 0);
db.setVersion(1);
assert(db.getVersion() == 1);
auto s = db.prepare("INSERT INTO test VALUES (?, ?)");
s.bind(1, "key1");
s.bind(2, "value");
s.exec();
s.bind(1, "key2");
s.bind(2, null);
s.exec();
s = db.prepare("SELECT * FROM test ORDER BY id ASC");
auto r = s.exec();
assert(r.front[0] == "key1");
r.popFront();
assert(r.front[1] == null);
r.popFront();
assert(r.empty);
}
}

13458 src/sync.d

File diff suppressed because it is too large

View file

@@ -1,302 +0,0 @@
import std.algorithm, std.conv, std.datetime, std.file, std.json;
import std.stdio, core.thread, std.string;
import progress, onedrive, util;
static import log;
private long fragmentSize = 10 * 2^^20; // 10 MiB
struct UploadSession
{
private OneDriveApi onedrive;
private bool verbose;
// https://dev.onedrive.com/resources/uploadSession.htm
private JSONValue session;
// path where to save the session
private string sessionFilePath;
this(OneDriveApi onedrive, string sessionFilePath)
{
assert(onedrive);
this.onedrive = onedrive;
this.sessionFilePath = sessionFilePath;
this.verbose = verbose;
}
JSONValue upload(string localPath, const(char)[] parentDriveId, const(char)[] parentId, const(char)[] filename, const(char)[] eTag = null)
{
// Fix https://github.com/abraunegg/onedrive/issues/2
// More Details https://github.com/OneDrive/onedrive-api-docs/issues/778
SysTime localFileLastModifiedTime = timeLastModified(localPath).toUTC();
localFileLastModifiedTime.fracSecs = Duration.zero;
JSONValue fileSystemInfo = [
"item": JSONValue([
"@name.conflictBehavior": JSONValue("replace"),
"fileSystemInfo": JSONValue([
"lastModifiedDateTime": localFileLastModifiedTime.toISOExtString()
])
])
];
// Try to create the upload session for this file
session = onedrive.createUploadSession(parentDriveId, parentId, filename, eTag, fileSystemInfo);
if ("uploadUrl" in session){
session["localPath"] = localPath;
save();
return upload();
} else {
// there was an error
log.vlog("Create file upload session failed ... skipping file upload");
// return upload() will return a JSONValue response, create an empty JSONValue response to return
JSONValue response;
return response;
}
}
/* Restore the previous upload session.
* Returns true if the session is valid. Call upload() to resume it.
* Returns false if there is no session or the session is expired. */
bool restore()
{
if (exists(sessionFilePath)) {
log.vlog("Trying to restore the upload session ...");
// We can't use a JSONType.object check, as this is currently a string
// We can't use a try & catch block, as it does not catch std.json.JSONException
auto sessionFileText = readText(sessionFilePath);
if(canFind(sessionFileText,"@odata.context")) {
session = readText(sessionFilePath).parseJSON();
} else {
log.vlog("Upload session resume data is invalid");
remove(sessionFilePath);
return false;
}
// Check the session resume file for expirationDateTime
if ("expirationDateTime" in session){
// expirationDateTime in the file
auto expiration = SysTime.fromISOExtString(session["expirationDateTime"].str);
if (expiration < Clock.currTime()) {
log.vlog("The upload session is expired");
return false;
}
if (!exists(session["localPath"].str)) {
log.vlog("The file does not exist anymore");
return false;
}
// Can we read the file - as a permissions issue or file corruption will cause a failure on resume
// https://github.com/abraunegg/onedrive/issues/113
if (readLocalFile(session["localPath"].str)){
// able to read the file
// request the session status
JSONValue response;
try {
response = onedrive.requestUploadStatus(session["uploadUrl"].str);
} catch (OneDriveException e) {
// handle any onedrive error response
if (e.httpStatusCode == 400) {
log.vlog("Upload session not found");
return false;
}
}
// do we have a valid response from OneDrive?
if (response.type() == JSONType.object){
// JSON object
if (("expirationDateTime" in response) && ("nextExpectedRanges" in response)){
// has the elements we need
session["expirationDateTime"] = response["expirationDateTime"];
session["nextExpectedRanges"] = response["nextExpectedRanges"];
if (session["nextExpectedRanges"].array.length == 0) {
log.vlog("The upload session is completed");
return false;
}
} else {
// bad data
log.vlog("Restore file upload session failed - invalid data response from OneDrive");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
return false;
}
} else {
// not a JSON object
log.vlog("Restore file upload session failed - invalid response from OneDrive");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
return false;
}
return true;
} else {
// unable to read the local file
log.vlog("Restore file upload session failed - unable to read the local file");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
return false;
}
} else {
// session file contains an error - can't resume
log.vlog("Restore file upload session failed - cleaning up session resume");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
return false;
}
}
return false;
}
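// For reference, the session resume file consumed by restore() above has this
// shape (field names as read by the code; values illustrative):
// {
//   "@odata.context": "...",
//   "uploadUrl": "https://.../uploadSession?...",
//   "expirationDateTime": "2024-01-16T00:00:00Z",
//   "nextExpectedRanges": ["0-"],
//   "localPath": "/home/user/file.bin"
// }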
JSONValue upload()
{
// Response for upload
JSONValue response;
// session JSON needs to contain valid elements
long offset;
long fileSize;
if ("nextExpectedRanges" in session){
offset = session["nextExpectedRanges"][0].str.splitter('-').front.to!long;
}
if ("localPath" in session){
fileSize = getSize(session["localPath"].str);
}
if ("uploadUrl" in session){
// Upload file via session created
// Upload Progress Bar
size_t iteration = (roundTo!int(double(fileSize)/double(fragmentSize)))+1;
Progress p = new Progress(iteration);
p.title = "Uploading";
long fragmentCount = 0;
long fragSize = 0;
// Initialise the upload progress bar at 0%
p.next();
while (true) {
fragmentCount++;
log.vdebugNewLine("Fragment: ", fragmentCount, " of ", iteration);
p.next();
log.vdebugNewLine("fragmentSize: ", fragmentSize, "offset: ", offset, " fileSize: ", fileSize );
fragSize = fragmentSize < fileSize - offset ? fragmentSize : fileSize - offset;
log.vdebugNewLine("Using fragSize: ", fragSize);
// fragSize must not be a negative value
if (fragSize < 0) {
// Session upload will fail
// not a JSON object - fragment upload failed
log.vlog("File upload session failed - invalid calculation of fragment size");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
// set response to null as error
response = null;
return response;
}
// If the resume upload fails, we need to check for a return code here
try {
response = onedrive.uploadFragment(
session["uploadUrl"].str,
session["localPath"].str,
offset,
fragSize,
fileSize
);
} catch (OneDriveException e) {
// if a 100 response is generated, continue
if (e.httpStatusCode == 100) {
continue;
}
// there was an error response from OneDrive when uploading the file fragment
// handle 'HTTP request returned status code 429 (Too Many Requests)' first
if (e.httpStatusCode == 429) {
auto retryAfterValue = onedrive.getRetryAfterValue();
log.vdebug("Fragment upload failed - received throttle request response from OneDrive");
log.vdebug("Using Retry-After Value = ", retryAfterValue);
// Sleep thread as per request
log.log("\nThread sleeping due to 'HTTP request returned status code 429' - The request has been throttled");
log.log("Sleeping for ", retryAfterValue, " seconds");
Thread.sleep(dur!"seconds"(retryAfterValue));
log.log("Retrying fragment upload");
} else {
// insert a new line as well, so that the below error is inserted on the console in the right location
log.vlog("\nFragment upload failed - received an exception response from OneDrive");
// display what the error is
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
// retry fragment upload in case error is transient
log.vlog("Retrying fragment upload");
}
try {
response = onedrive.uploadFragment(
session["uploadUrl"].str,
session["localPath"].str,
offset,
fragSize,
fileSize
);
} catch (OneDriveException e) {
// OneDrive threw another error on retry
log.vlog("Retry to upload fragment failed");
// display what the error is
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
// set response to null as the fragment upload was in error twice
response = null;
}
}
// was the fragment uploaded without issue?
if (response.type() == JSONType.object){
offset += fragmentSize;
if (offset >= fileSize) break;
// update the session details
session["expirationDateTime"] = response["expirationDateTime"];
session["nextExpectedRanges"] = response["nextExpectedRanges"];
save();
} else {
// not a JSON object - fragment upload failed
log.vlog("File upload session failed - invalid response from OneDrive");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
// set response to null as error
response = null;
return response;
}
}
// upload complete
p.next();
writeln();
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
return response;
} else {
// session elements were not present
log.vlog("Session has no valid upload URL ... skipping this file upload");
// return an empty JSON response
response = null;
return response;
}
}
string getUploadSessionLocalFilePath() {
// return the session file path
string localPath = "";
if ("localPath" in session){
localPath = session["localPath"].str;
}
return localPath;
}
// save session details to temp file
private void save()
{
std.file.write(sessionFilePath, session.toString());
}
}
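The fragment sizing used by upload() above, isolated as a sketch; the 25 MiB file size is hypothetical:

// With a 10 MiB fragment limit, a 25 MiB file uploads as 10 + 10 + 5 MiB
void fragmentMath() {
	long fragmentLimit = 10 * 2^^20; // 10 MiB, matching the module constant above
	long fileSize = 25 * 2^^20;      // hypothetical 25 MiB file
	long offset = 0;
	long uploaded = 0;
	while (offset < fileSize) {
		// the final fragment shrinks to the remaining bytes, as in upload()
		long fragSize = fragmentLimit < fileSize - offset ? fragmentLimit : fileSize - offset;
		uploaded += fragSize;
		offset += fragmentLimit; // mirrors upload(): offset advances by the full fragment size
	}
	assert(uploaded == fileSize); // 10 MiB + 10 MiB + 5 MiB
}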

1079 src/util.d

File diff suppressed because it is too large