FOR VERSION 0.2

- use memNewRef instead of memcpy wherever possible. This should
  improve the performance again. On the other hand, don't stop
  thinking about whether a memory transfer is needed at all.

- build a json parser and builder...
  There are several solutions out there, but I think it will be best
  to use my own that fits nicely into the rest of this code.

  A json builder and a parser are different things with different efforts.
  - parser: a state machine that either fills a data structure in a
    given way or calls callbacks with the relevant information.

  I just thought about this and have decided to create a json class
  that consists only of a void pointer to the root element of the
  json data plus information about what type it is. This will be a
  JsonValue. A json value always contains the type of the value as
  well as a representation of the value itself. The type might be
  string, number (which also includes true and false), hash, or
  array. A value of type hash is indexed by strings and might hold
  arbitrary JsonValues as its values. The array is a NULL-terminated
  array of JsonValues.
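
  The JsonValue described above could look like this minimal C sketch.
  All names here are assumptions for illustration, not the project's
  actual code:

  ```c
  #include <stdlib.h>

  /* Hypothetical sketch of the JsonValue idea: a type tag plus a
   * void pointer to the representation of the value itself. */
  typedef enum {
      JSON_STRING,
      JSON_NUMBER, /* also covers true/false */
      JSON_HASH,   /* indexed by strings, values are JsonValues */
      JSON_ARRAY   /* NULL-terminated array of JsonValue pointers */
  } JsonType;

  typedef struct JsonValue {
      JsonType type;  /* what kind of value this is */
      void    *value; /* the representation, interpreted via type */
  } JsonValue;

  /* allocate a value carrying its type tag */
  static JsonValue *json_value_new(JsonType type, void *value)
  {
      JsonValue *v = malloc(sizeof *v);
      if (v != NULL) {
          v->type  = type;
          v->value = value;
      }
      return v;
  }
  ```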

  OK, that has the drawback that I need to organize the data structures
  of my other classes to be json capable.

  Another idea: I create an interface "initjson" or something like that.
  This interface must define the methods to set given data from json
  values. A list of these might be:
  - start_hash
  - start_array
  - end_hash
  - end_array
  - key
  - value
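
  The callback list above could be collected into a table of function
  pointers that the parser state machine fires while walking the input.
  This is a hypothetical sketch; every name is an assumption:

  ```c
  #include <stddef.h>

  /* Hypothetical "initjson" interface: the parser calls these as it
   * encounters the corresponding json tokens. */
  typedef struct InitJson {
      void *ctx; /* passed back to every callback */
      void (*start_hash)(void *ctx);
      void (*start_array)(void *ctx);
      void (*end_hash)(void *ctx);
      void (*end_array)(void *ctx);
      void (*key)(void *ctx, const char *name);
      void (*value)(void *ctx, const char *raw); /* raw token text */
  } InitJson;

  /* example consumer: track how deeply nested the input is */
  struct depth_ctx { int depth; int max_depth; };

  static void enter(void *p)
  {
      struct depth_ctx *c = p;
      if (++c->depth > c->max_depth)
          c->max_depth = c->depth;
  }

  static void leave(void *p)
  {
      struct depth_ctx *c = p;
      c->depth--;
  }

  static void ignore_key(void *p, const char *name)  { (void)p; (void)name; }
  static void ignore_value(void *p, const char *raw) { (void)p; (void)raw; }
  ```

  A consumer fills the table once and the parser never needs to know
  what is being built behind it.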

- a set of helper methods that serialize given data into a json string
  representation.
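
  One such helper could serialize a plain C string into its json string
  form, quotes plus minimal escaping. A hypothetical sketch, not the
  project's code:

  ```c
  #include <stddef.h>

  /* Hypothetical helper: write the json string representation of `in`
   * into `out`. Returns the number of characters needed (excluding
   * the terminator), so callers can size a buffer by calling twice. */
  static size_t json_escape_string(const char *in, char *out, size_t outsz)
  {
      size_t n = 0;
      if (n < outsz) out[n] = '"';
      n++;
      for (; *in != '\0'; in++) {
          const char *esc = NULL;
          switch (*in) {
          case '"':  esc = "\\\""; break;
          case '\\': esc = "\\\\"; break;
          case '\n': esc = "\\n";  break;
          case '\t': esc = "\\t";  break;
          }
          if (esc != NULL) {
              for (; *esc != '\0'; esc++) { if (n < outsz) out[n] = *esc; n++; }
          } else {
              if (n < outsz) out[n] = *in;
              n++;
          }
      }
      if (n < outsz) out[n] = '"';
      n++;
      if (n < outsz) out[n] = '\0';
      else if (outsz > 0) out[outsz - 1] = '\0';
      return n;
  }
  ```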

- Let a user create tasks and retrieve them again after login.

- Implement roles and a role-based access model.

- Create management for roles.

- Give a user the ability to open tasks to other users / roles...

- Right now I will use long-polling ajax calls when feedback to the client
  is needed. In the long term this should be changed to websockets (ws), but
  right now the ws specification is not final anyway. :) (optional)

- IPV6 support (optional)

- optimize session handling. Right now session handling slows down
  everything incredibly... especially if there were some clients
  that don't use cookies...
  The reason is that on every request all sessions are checked to see
  whether they can be closed. As the session count increases, this
  slows everything down.
  TODO:
  * add an additional storage layer: an ordered list indexed by lifetime.
    Each element of this list is an array of session ids that share this
    lifetime. This would have been a classical queue, if there weren't
    the need to update elements in between.
    So three cases:
    * delete: the element is at the beginning of the list.
    * insert: always put at the end of the list.
    * update: the lifetime can be fetched in O(log n) from the
      session hash. If the list is not a list but an array,
      I could get to the element in O(log n) via successive
      approximation (binary search). But this would involve a
      memmove afterwards. Additionally it will make memory
      management more difficult: we need a large enough array
      that may grow over time. With the optimized memory
      management this leaves us with large memory segments that
      might never be used... so this would involve splitting
      again. So a better alternative might again be a tree...
    The workflow to handle session updates would be:
    * look up the session in the session hash: O(log n)
    * remove the session from the session time tree: O(log n)
    * if the session timed out:
      * remove the session from the session hash: O(log n)
    * else:
      * update the timeout: O(1)
      * insert the session again into the session timeout tree: O(log n)
    So I end up with a complexity of 3*O(log n), not too bad.
    But each lifetime might hold several session ids, so
    with an update this again has to be looked up.
    Anyway, it will be faster than walking through a possibly
    very large monster list of sessions.
    Also, one should keep in mind that the timeout list holds
    at most as many entries as the timeout has seconds.
  * store the lowest lifetime (ideally it can be retrieved from the key
    of the first element of the previous list).
  * try to delete sessions only if the lowest lifetime is expired.
  * store the sessions in a hash indexed by id again.
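
  The array variant weighed above (binary search to find an entry,
  memmove to keep the order) can be sketched like this. Everything
  here is a hypothetical illustration with a fixed-size array; bounds
  and growth handling are omitted:

  ```c
  #include <string.h>

  /* Hypothetical session-timeout index: an array of (expiry, id)
   * pairs kept sorted by expiry. Lookup is O(log n) via binary
   * search; insert/remove need a memmove to close or open the gap. */
  #define MAX_SESSIONS 16

  typedef struct {
      long expiry; /* absolute timeout of the session */
      int  id;     /* session id */
  } TimeoutEntry;

  static TimeoutEntry idx[MAX_SESSIONS];
  static int idx_len = 0;

  /* first position whose expiry is >= e: O(log n) */
  static int lower_bound(long e)
  {
      int lo = 0, hi = idx_len;
      while (lo < hi) {
          int mid = (lo + hi) / 2;
          if (idx[mid].expiry < e) lo = mid + 1; else hi = mid;
      }
      return lo;
  }

  /* insert keeping the array sorted; fresh expiries land at the end */
  static void timeout_insert(long e, int id)
  {
      int pos = lower_bound(e);
      memmove(&idx[pos + 1], &idx[pos],
              (size_t)(idx_len - pos) * sizeof idx[0]);
      idx[pos].expiry = e;
      idx[pos].id = id;
      idx_len++;
  }

  /* remove the entry for (e, id); memmove closes the gap */
  static void timeout_remove(long e, int id)
  {
      for (int pos = lower_bound(e); pos < idx_len; pos++) {
          if (idx[pos].expiry == e && idx[pos].id == id) {
              memmove(&idx[pos], &idx[pos + 1],
                      (size_t)(idx_len - pos - 1) * sizeof idx[0]);
              idx_len--;
              return;
          }
      }
  }

  /* refresh a session: remove plus reinsert, two O(log n) searches */
  static void timeout_update(long old_e, long new_e, int id)
  {
      timeout_remove(old_e, id);
      timeout_insert(new_e, id);
  }
  ```

  The expired sessions are always at the front of the array, so the
  cleanup pass only has to look at index 0 instead of walking the
  whole session list.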


FOR VERSION 1.0

- support for multiple worker processes. (optional)
- I need a distributed storage system and a way to distribute
  sessions to be able to scale horizontally.
  I first thought about using couchdb, but I think I will try
  to implement a network layer over gdbm.
  This will result in a small daemon handling connections to
  various gdbm files and a bunch of connected clients.
  Over one client connection several gdbm database files
  might be accessed. Each request might send the handle of the
  database file, which it first obtains via ngdbm_open.
  In fact I think I will create a one-to-one mapping of
  gdbm commands to network requests, and the client lib will
  provide each gdbm function, just with an 'n' prefix.
  All this might result in either synchronization or performance
  problems, but at least I will give it a try.
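
  One way the one-to-one mapping could look on the wire, assuming a
  simple fixed binary request header. Every name here is hypothetical;
  only the operation list mirrors the classic gdbm calls
  (open/store/fetch/delete/close):

  ```c
  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical ngdbm wire format: one request header per gdbm call,
   * followed by the raw key and value bytes. */
  enum ngdbm_op {
      NGDBM_OPEN   = 1,
      NGDBM_STORE  = 2,
      NGDBM_FETCH  = 3,
      NGDBM_DELETE = 4,
      NGDBM_CLOSE  = 5
  };

  struct ngdbm_req {
      uint8_t  op;      /* one of enum ngdbm_op */
      uint32_t handle;  /* database handle returned by NGDBM_OPEN */
      uint32_t key_len; /* length of the key bytes that follow */
      uint32_t val_len; /* 0 for fetch/delete/close */
  };

  /* serialize the fixed 13-byte header in network byte order */
  static size_t ngdbm_pack_header(const struct ngdbm_req *r, uint8_t buf[13])
  {
      size_t n = 0;
      buf[n++] = r->op;
      for (int s = 24; s >= 0; s -= 8) buf[n++] = (uint8_t)(r->handle  >> s);
      for (int s = 24; s >= 0; s -= 8) buf[n++] = (uint8_t)(r->key_len >> s);
      for (int s = 24; s >= 0; s -= 8) buf[n++] = (uint8_t)(r->val_len >> s);
      return n; /* key and value bytes would follow on the wire */
  }
  ```

  With a framing like this, the client-side ngdbm_* wrappers stay thin:
  pack a header, ship key/value bytes, read the reply.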