Should we parse everything into a structure as it comes off the wire, before it is actually handled?  It's kind of nice to have everything cleanly in memory where we can look at it while debugging, but really it is quite unnecessary, for several reasons:

 * There's no need to copy data into a structure, and then back out again when we want to use it.

 * Almost every member of the structure is accessed only once.

 * Because of the way NFS maps onto RPC, the structures are not a particularly natural fit anyhow.

 * Some fields may not be accessed at all, so copying them out is wasteful.

 * For header errors, we're in a better position to produce the correct error message if we know exactly where we were up to when the error occurred.

----

Eventually we could pay less attention to the record-marking header, and instead parse each message as it arrives.  This would perhaps allow more pipelining, but it is only of much use if a request stream contains multiple long commands that don't all arrive at the same time.  I don't think the additional complexity would be justified.

----

Hybrid select/fork model: both forking and select/poll run into performance problems at very high loads.  Forking uses up too much memory and makes the run queue long; poll, on the other hand, involves scanning long lists of file descriptors, and takes away from disk concurrency.  Therefore we might have a pool of /n/ daemons, each of which has /m/ open file descriptors.  If all the network IO stuff is kept in the RM layer, then we might be able to do this cleanly in the future.
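
A minimal sketch of that hybrid model, assuming a pre-forked pool sharing one listening socket.  POOL_SIZE (/n/), MAX_CONNS (/m/), serve_one() and the port number are illustrative names, not existing code; the point is just that each child polls a short descriptor list of its own, and the parent forks only /n/ times rather than once per connection.

    #include <fcntl.h>
    #include <netinet/in.h>
    #include <poll.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define POOL_SIZE 8      /* n: pre-forked daemons */
    #define MAX_CONNS 64     /* m: descriptors per daemon */

    /* Hypothetical per-request handler; the real server would hand the
       descriptor to the RM layer here.  Returns -1 once the peer closes. */
    static int serve_one(int fd)
    {
        char buf[4096];
        if (read(fd, buf, sizeof buf) <= 0) {
            close(fd);
            return -1;
        }
        /* ... decode and dispatch the request ... */
        return 0;
    }

    static void worker(int listen_fd)
    {
        struct pollfd fds[MAX_CONNS];
        int nfds = 1;

        fds[0].fd = listen_fd;          /* slot 0: shared listening socket */
        fds[0].events = POLLIN;

        for (;;) {
            if (poll(fds, nfds, -1) < 0)
                continue;

            /* Take a new connection if a slot is free.  listen_fd is
               non-blocking, since a sibling may win the accept race. */
            if ((fds[0].revents & POLLIN) && nfds < MAX_CONNS) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) {
                    fds[nfds].fd = conn;
                    fds[nfds].events = POLLIN;
                    nfds++;
                }
            }

            /* Service ready connections; compact out any that closed. */
            for (int i = 1; i < nfds; i++)
                if ((fds[i].revents & POLLIN) && serve_one(fds[i].fd) < 0)
                    fds[i--] = fds[--nfds];
        }
    }

    int main(void)
    {
        struct sockaddr_in addr;
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(12049);   /* placeholder port */

        if (listen_fd < 0
            || bind(listen_fd, (struct sockaddr *) &addr, sizeof addr) < 0
            || listen(listen_fd, 128) < 0)
            return 1;
        fcntl(listen_fd, F_SETFL, O_NONBLOCK);

        for (int i = 0; i < POOL_SIZE; i++)
            if (fork() == 0)
                worker(listen_fd);      /* children never return */

        for (;;)
            pause();                    /* would reap and restart children */
    }

The trade-off is visible in the two constants: no process ever scans more than /m/ descriptors per poll, and the run queue never holds more than /n/ of these daemons, however many clients connect.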