Age | Commit message | Author |
|
It now derives from IdStorage, so lots of GoogleDrive*Request classes
are removed and replaced with generic IdStorage*Request ones.
|
|
|
|
Used for downloading files in Box.
|
|
Box gets createDirectoryWithParentId(), so now creating directories
works there.
|
|
This is a special base class for Storages which use ids instead of paths in their APIs, like Box or Google Drive.
This commit makes Box derive from IdStorage.
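A minimal sketch of the idea, with illustrative names only (not the actual ScummVM interface): the backend implements the id-based primitives, and the shared base class builds path-based operations on top of them.

```cpp
// Illustrative sketch only -- not ScummVM's actual IdStorage interface.
#include <functional>
#include <string>

class IdStorage {
public:
	virtual ~IdStorage() {}

	// Backend-specific primitives each id-based storage (Box, Google Drive) provides:
	virtual void resolveId(const std::string &path,
	                       std::function<void(const std::string &)> callback) = 0;
	virtual void downloadById(const std::string &id, const std::string &localPath,
	                          std::function<void(bool)> callback) = 0;

	// Shared path-based operation: resolve the path first, then reuse the
	// id-based primitive. Generic IdStorage*Request classes play this role.
	void download(const std::string &path, const std::string &localPath,
	              std::function<void(bool)> callback) {
		resolveId(path, [this, localPath, callback](const std::string &id) {
			downloadById(id, localPath, callback);
		});
	}
};
```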
|
|
And used BoxResolveIdRequest in it.
TODO: make a generic ResolveIdRequest and ListDirectoryRequest for
id-based storages. They're really similar; I only had to change a few
details in the GoogleDrive ListDirectory and ResolveId requests.
|
|
Like Google Drive, Box uses only file ids. That means id resolving
will be slow.
|
|
BoxTokenRefresher refreshes the token if the server returns HTTP 401.
To test the refresher, BoxStorage::info() was added.
|
|
|
|
Simplified some stuff here and there by using HandlerUtils static
methods.
|
|
Now Client reads the first headers block, then LocalWebserver decides
which Handler to use. In case of "/upload", UploadFileHandler is used.
But it only knows the "path" parameter. If that's valid, the actual
UploadFileClientHandler is created, which reads the contents of the
request and, when it finds an "upload_file" field there, starts saving it
into the directory specified by "path".
With that we don't need the temp-files approach from the Reader class.
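A rough sketch of the first stage of that flow, with hypothetical names and a simplified query-parameter map (the real LocalWebserver/Handler classes differ):

```cpp
// Hypothetical dispatch sketch; names and shapes are illustrative only.
#include <map>
#include <memory>
#include <string>

struct ClientHandler { virtual ~ClientHandler() {} /* reads the rest of the request */ };

struct UploadFileClientHandler : ClientHandler {
	explicit UploadFileClientHandler(const std::string &directory) : _directory(directory) {}
	// ...would parse the request body and save the "upload_file" field here...
	std::string _directory;
};

// UploadFileHandler only looks at the headers block: it validates the "path"
// parameter and, if it's fine, hands the connection over to an
// UploadFileClientHandler that streams the body straight to disk.
std::unique_ptr<ClientHandler> handleUpload(const std::map<std::string, std::string> &query) {
	auto it = query.find("path");
	if (it == query.end() || it->second.empty())
		return nullptr; // would respond with an error page instead
	return std::make_unique<UploadFileClientHandler>(it->second);
}
```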
|
|
|
|
|
|
Now one can download files from the device through the browser!
|
|
It redirects to "/files" on success, so the user doesn't even see the
strange "/create" URL at all.
This commit is about keeping these handlers small instead of making one
(FilesPageHandler in this case) do everything.
|
|
Its handlers are now more compact. This commit moves the Handler classes
into the handlers\ directory.
ResourceHandler ignores "hidden" files in the archive; these are used as
markup templates in IndexPageHandler and FilesPageHandler.
|
|
In most cases that's the right one to check. USE_CLOUD is defined when
either USE_LIBCURL or USE_SDL_NET is, which means that even without curl
USE_CLOUD could still be defined and linking errors would appear.
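For example, curl-dependent code belongs behind the narrower define:

```cpp
// Guard curl-dependent code with USE_LIBCURL rather than USE_CLOUD:
// USE_CLOUD may be set because of USE_SDL_NET alone, so a USE_CLOUD block
// could still reference curl symbols that were never built.
#ifdef USE_LIBCURL
	// curl-backed networking code goes here
#endif

#ifdef USE_SDL_NET
	// local webserver code goes here
#endif
```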
|
|
This commit also adds LocalWebserver's stopOnIdle().
That means the server is not stopped immediately, but only when all
clients have been served.
|
|
That ClientHandler is made for responding to GET requests. It calculates
the stream's length, allows specifying the response code and headers, and
can be used to transfer any ReadStream.
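A standalone sketch of the response-building part, with made-up helper names (the real handler works with ScummVM's stream classes and writes to the client socket):

```cpp
// Rough sketch of what such a handler does; names are illustrative.
#include <cstddef>
#include <cstdio>
#include <sstream>
#include <string>

// Builds the status line and headers for a stream of known size, so any
// ReadStream (a file, a generated page, an archive member) can be served.
std::string makeResponseHeaders(int code, const std::string &message,
                                std::size_t contentLength, const std::string &contentType) {
	std::ostringstream headers;
	headers << "HTTP/1.1 " << code << " " << message << "\r\n"
	        << "Content-Type: " << contentType << "\r\n"
	        << "Content-Length: " << contentLength << "\r\n"
	        << "\r\n";
	return headers.str();
}

int main() {
	// Example: a 200 OK response for a 1024-byte stream.
	std::printf("%s", makeResponseHeaders(200, "OK", 1024, "application/octet-stream").c_str());
	// ...the handler would then copy the ReadStream to the socket in chunks...
}
```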
|
|
Keeps current client's state
|
|
Available as the LocalServer singleton. It's started and stopped by
StorageWizardDialog. It doesn't handle clients yet, though.
|
|
|
|
|
|
Includes a NetworkReadStream PATCH method and a headers-remembering feature.
|
|
GoogleDriveDownloadRequest, which resolves the file id and then downloads
the file with GoogleDriveStorage::downloadById().
GoogleDriveStreamFileRequest, which resolves the file id and then returns
a file stream with GoogleDriveStorage::streamFileById().
This commit also adds GoogleDriveStorage::streamFileById() itself.
A minor GoogleDriveResolveIdRequest fix is included as well.
With these, one can download files from Google Drive.
|
|
When listing directories, you get a list of StorageFiles whose path()
is actually the Google Drive id. Thus, if you list a directory recursively,
you won't be able to determine whether all files are within one
directory or form some hierarchy. I'll fix that as soon as it's needed.
|
|
Now we can create directories in Google Drive by path, not parent id +
directory name!
|
|
GoogleDriveResolveIdRequest gets a lowercase path and searches for the
specified file's id. To do that, it lists the path's subdirectories one by
one with GoogleDriveListDirectoryByIdRequest.
GoogleDriveListDirectoryByIdRequest gets a Google Drive id and lists
that directory (not recursively).
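A simplified, synchronous sketch of that resolve-by-listing loop; the real request is asynchronous and chains GoogleDriveListDirectoryByIdRequest calls, and using "root" as the starting id is an assumption here:

```cpp
// Simplified sketch of resolving a path to an id by listing one level at a time.
#include <sstream>
#include <string>
#include <vector>

struct Entry { std::string name, id; };

// Stand-in for one non-recursive "list directory by id" call.
std::vector<Entry> listDirectoryById(const std::string &id);

std::string resolveId(const std::string &lowercasePath) {
	std::string currentId = "root";             // assumed alias for the root folder
	std::istringstream components(lowercasePath);
	std::string component;
	while (std::getline(components, component, '/')) {
		if (component.empty()) continue;
		bool found = false;
		for (const Entry &e : listDirectoryById(currentId)) {
			if (e.name == component) {          // descend into the matching child
				currentId = e.id;
				found = true;
				break;
			}
		}
		if (!found) return "";                  // path doesn't exist
	}
	return currentId;
}
```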
|
|
To achieve smoother animation, ConnectionManager's timer is now 20 times
more frequent.
I'm encountering a strange libcurl.dll segfault when I close the
application while some Requests are active. It's not CloudIcon-related,
so it's more likely related to this 20 FPS timer. This problem shows up
only in Visual Studio for me.
|
|
It has its own GoogleDriveTokenRefresher and knows how to do info().
This commit also contains a JSON int -> long long int fix and a
CurlJsonRequest '\n' -> ' ' fix.
|
|
It's needed so we can ::destroy() it in main().
|
|
|
|
Also add CloudManager::testFeature(), because syncSaves() now works fine
and I don't want to break it again and again with my testing requests.
|
|
Doesn't support the server's requested ranges yet.
The commit also adds some PUT-related code in NetworkReadStream and
CurlRequest.
|
|
Added ErrorResponse and ErrorCallback. Each Request now has an
ErrorCallback, which should be called instead of the usual callback in
case of failure.
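The shape of that pairing, roughly (illustrative types, not the exact ScummVM Request API):

```cpp
// Illustrative success/error callback pair on a request base class.
#include <functional>
#include <string>

struct ErrorResponse {
	long httpCode = 0;
	std::string response;   // server's body, useful for diagnostics
};

class Request {
public:
	using Callback = std::function<void()>;
	using ErrorCallback = std::function<void(const ErrorResponse &)>;

	Request(Callback cb, ErrorCallback ecb) : _callback(cb), _errorCallback(ecb) {}

protected:
	// Exactly one of these fires when the request finishes.
	void finishSuccess() { if (_callback) _callback(); }
	void finishError(const ErrorResponse &error) { if (_errorCallback) _errorCallback(error); }

	Callback _callback;
	ErrorCallback _errorCallback;
};
```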
|
|
|
|
Never tested it, actually. It requires Storage to implement the upload()
method, and me to find some way to get the saves' ReadStream.
The saveTimestamps() and loadTimestamps() part should be tested; the
other parts should work fine.
|
|
Works like a charm.
|
|
Uses Storage's listDirectory() and download() methods to download
contents.
|
|
OneDriveTokenRefresher is a CurlJsonRequest replacement for
OneDriveStorage methods. It behaves very similarly, but checks the received
JSON before passing it to the user. If it contains an "error" key, it
attempts to refresh the token through OneDriveStorage and then restarts the
original request, so the user won't notice that there ever was an error.
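A greatly simplified sketch of that check-refresh-retry logic, using stand-in types instead of the real CurlJsonRequest/JSON classes:

```cpp
// Stand-in types; the real class wraps CurlJsonRequest and ScummVM's JSON values.
#include <functional>
#include <map>
#include <string>

using Json = std::map<std::string, std::string>;   // stand-in for a parsed JSON object

struct TokenRefresherSketch {
	std::function<void(const Json &)> userCallback;
	std::function<void(std::function<void()>)> refreshToken; // storage's refresh routine
	std::function<void()> restartRequest;                    // re-run the original request

	void onJsonReceived(const Json &json) {
		if (json.count("error")) {
			// Token probably expired: refresh it, then silently retry,
			// so the caller never sees the intermediate failure.
			refreshToken([this]() { restartRequest(); });
			return;
		}
		userCallback(json); // looks like a normal response -- pass it through
	}
};
```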
|
|
Knows how to do OAuth already.
This commit also adds CloudManager::addStorage(), so OneDriveStorage can
add a newly created Storage and CloudManager can save it in the
configuration file.
|
|
It reads the passed NetworkReadStream and prints its contents to the
console (for now). Eventually it will write the contents into a file.
To simplify work with a raw NetworkReadStream there is a new CurlRequest.
It basically does nothing, but as ConnMan handles transfers only if
there is an active Request, you need some Request to get a
NetworkReadStream working. Thus, there is CurlRequest, which stays active
until the NetworkReadStream is completely read. CurlRequest also has useful
addHeader() and addPostField() methods to customize the request easily.
Use the execute() method to get its NetworkReadStream.
DropboxStorage implements the streamFile() and download() API methods. As
DownloadRequest is incomplete, it doesn't actually download a file yet,
though.
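For reference, roughly the same flow written directly against libcurl as a standalone program; this is only a sketch of what the CurlRequest/NetworkReadStream pair wraps, not the actual implementation, and the URL and header values are placeholders:

```cpp
// Standalone libcurl sketch: custom header, POST field, response printed to console.
#include <curl/curl.h>
#include <cstdio>

static size_t printChunk(char *data, size_t size, size_t nmemb, void *) {
	return fwrite(data, 1, size * nmemb, stdout);   // "prints its contents to the console"
}

int main() {
	curl_global_init(CURL_GLOBAL_DEFAULT);
	CURL *easy = curl_easy_init();

	// addHeader() analogue:
	struct curl_slist *headers = nullptr;
	headers = curl_slist_append(headers, "Authorization: Bearer <access token>");
	curl_easy_setopt(easy, CURLOPT_HTTPHEADER, headers);

	// addPostField() analogue:
	curl_easy_setopt(easy, CURLOPT_POSTFIELDS, "param=value");

	curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/api");
	curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, printChunk);
	curl_easy_perform(easy);                        // execute() analogue, but blocking

	curl_slist_free_all(headers);
	curl_easy_cleanup(easy);
	curl_global_cleanup();
}
```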
|
|
Does multiple CurlJsonRequests while Dropbox returns "has_more" = true.
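Sketched synchronously, the idea looks like this; the cursor-style continuation call is an assumption here, and the real code chains asynchronous CurlJsonRequests instead of looping:

```cpp
// Pagination sketch: keep asking for more entries while "has_more" is true.
#include <string>
#include <vector>

struct ListChunk {
	std::vector<std::string> entries;
	std::string cursor;       // continuation token (assumed)
	bool hasMore = false;     // Dropbox's "has_more" flag
};

ListChunk listFolder(const std::string &path);            // first request
ListChunk listFolderContinue(const std::string &cursor);  // follow-up requests

std::vector<std::string> listAll(const std::string &path) {
	std::vector<std::string> result;
	ListChunk chunk = listFolder(path);
	while (true) {
		result.insert(result.end(), chunk.entries.begin(), chunk.entries.end());
		if (!chunk.hasMore) break;            // server said "has_more": false
		chunk = listFolderContinue(chunk.cursor);
	}
	return result;
}
```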
|
|
And do some minor cleanup work.
|
|
Tried to compile these last two commits with GCC and found a few minor
problems.
|
|
With the ConnectionManager singleton one can start Requests without
creating a Storage instance. Moreover, a Storage instance should contain
the cloud API, not Request-handling and timer-starting methods.
Thus, these methods were moved into ConnectionManager itself.
|
|
Now we can do a REST API request by creating a CurlJsonRequest and waiting
for it to call our callback. The passed pointer is a Common::JSONValue.
This commit also includes some minor variable renaming fixes.
|
|
Now it is based on MemoryReadWriteStream, which is introduced by this
commit. This stream uses a ring buffer and dynamically increases its size
when necessary.
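A minimal illustration of a growing ring buffer (not the actual MemoryReadWriteStream code): data wraps around a byte array, and the array is reallocated when a write would not fit.

```cpp
// Toy ring buffer that grows on demand; illustration only.
#include <cassert>
#include <cstring>
#include <vector>

class RingBuffer {
	std::vector<char> _data;
	size_t _readPos = 0, _length = 0;

	void grow(size_t needed) {
		size_t newSize = _data.empty() ? 16 : _data.size();
		while (newSize - _length < needed) newSize *= 2;
		std::vector<char> bigger(newSize);
		for (size_t i = 0; i < _length; ++i)            // unwrap old contents
			bigger[i] = _data[(_readPos + i) % _data.size()];
		_data.swap(bigger);
		_readPos = 0;
	}

public:
	void write(const char *src, size_t n) {
		if (_data.empty() || _data.size() - _length < n) grow(n);
		size_t writePos = (_readPos + _length) % _data.size();
		for (size_t i = 0; i < n; ++i)
			_data[(writePos + i) % _data.size()] = src[i];
		_length += n;
	}

	size_t read(char *dst, size_t n) {
		if (n > _length) n = _length;
		for (size_t i = 0; i < n; ++i)
			dst[i] = _data[(_readPos + i) % _data.size()];
		_readPos = (_readPos + n) % _data.size();
		_length -= n;
		return n;
	}
};

int main() {
	RingBuffer buf;
	buf.write("hello", 5);
	char out[6] = {};
	assert(buf.read(out, 5) == 5 && std::strcmp(out, "hello") == 0);
}
```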
|
|
NetworkReadStream actually saves the whole response in memory now.
There is a pause mechanism in libcurl, but if libcurl is requesting
something compressed, it has to uncompress the data as it goes even
if we paused the request. So even though our own stream wouldn't be
notified about this data while the request is "paused", libcurl's own
buffer would keep expanding.
|
|
CurlRequest uses its own multi_handle, in which it creates an easy_handle
to make a request.
Every time `handle()` is called, it checks whether the request is complete
and, if it is, stops.
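A standalone libcurl sketch of that pattern: one multi_handle per request, and a non-blocking handle() that pumps the transfer and detects completion. Error handling is omitted, and the real class is driven by ConnMan's timer rather than the busy loop used here.

```cpp
// One multi_handle per request, polled until the transfer finishes.
#include <curl/curl.h>
#include <cstdio>

struct CurlRequestSketch {
	CURLM *multi = nullptr;
	CURL *easy = nullptr;
	bool complete = false;

	void start(const char *url) {
		multi = curl_multi_init();
		easy = curl_easy_init();
		curl_easy_setopt(easy, CURLOPT_URL, url);
		curl_multi_add_handle(multi, easy);
	}

	// Called periodically: pump the transfer a little and check whether it finished.
	void handle() {
		int stillRunning = 0;
		curl_multi_perform(multi, &stillRunning);

		int msgsLeft = 0;
		while (CURLMsg *msg = curl_multi_info_read(multi, &msgsLeft)) {
			if (msg->msg == CURLMSG_DONE && msg->easy_handle == easy)
				complete = true;   // the request has finished; stop here
		}
		if (complete) {
			curl_multi_remove_handle(multi, easy);
			curl_easy_cleanup(easy);
			curl_multi_cleanup(multi);
		}
	}
};

int main() {
	curl_global_init(CURL_GLOBAL_DEFAULT);
	CurlRequestSketch request;
	request.start("https://example.com/");
	while (!request.complete)
		request.handle();
	std::puts("done");
	curl_global_cleanup();
}
```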
|