Getting Processing into the stdlib

by jesse


I shot an email out to Python-Dev earlier this week asking for comments/questions regarding my push to get the processing module into the standard library. There's been some decent discussion about target releases and other meta-issues around getting it in. Right now, it looks like I am going to try to target 2.7 and 3.1 - this makes sense for a few reasons:

  • First, the PEP deadline was, uh, a year ago for 2.6 and 3.0
  • There's some cleanup on the module which needs to be done
  • There might be some renaming requirements
  • Need to talk to R. about a 1.0 release
  • Need to chunk out some time to convert the tests to unittest format (a sketch of what that conversion looks like follows this list).

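On the unittest point, here's a minimal sketch of what I mean - a hypothetical rewrite of a bare assert-style script test as a TestCase, assuming processing's Queue behaves like the Queue.Queue API it mirrors (the test names and values here are made up for illustration):

    import unittest
    from processing import Queue  # the module under discussion

    class TestQueue(unittest.TestCase):
        # Hypothetical example: a script-style test reworked as a TestCase.

        def test_put_get(self):
            q = Queue()
            q.put(42)
            self.assertEqual(q.get(), 42)

        def test_starts_empty(self):
            self.assertTrue(Queue().empty())

    if __name__ == '__main__':
        unittest.main()
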
That all being said - it doesn't look infeasible to accomplish - and the response, both on-list and to me privately, has been 95% +1 and 5% -.5 and -1 - the positive response really does make me feel that this is the right approach to take.

I am currently working on revised benchmarks for processing vs. threads vs. pp (Parallel Python) vs. others - I'll publish those both here and to the mailing list discussion as soon as they're done, as a counterpoint to some of the open questions.
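To give a flavor of what those benchmarks measure, here's the rough shape of one CPU-bound comparison - a sketch with made-up workload sizes, not the actual harness - leaning on the fact that processing's Process takes the same target/args constructor as threading.Thread:

    import time
    import threading
    from processing import Process

    def burn(n):
        # CPU-bound busy loop: threads serialize on the GIL, processes don't.
        while n:
            n -= 1

    def timed(worker_cls, count=4, n=10**7):
        workers = [worker_cls(target=burn, args=(n,)) for _ in range(count)]
        start = time.time()
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        return time.time() - start

    if __name__ == '__main__':
        print('threads:   %.2fs' % timed(threading.Thread))
        print('processes: %.2fs' % timed(Process))

On a multi-core box the process version should win handily for this kind of workload; an I/O-bound version of the same harness tells a different story, which is part of why the benchmarks need care.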

I'd like to see if any of you, oh internet people, have anything else you'd like to have answered for this or anything you'd like to add to the discussion.

Note, I am not trying to solve the "distributed" problem with the inclusion of this - the remote capabilities of the processing module are a side benefit, not the primary reason for trying to get this in. I am taking some of the distributed stuff mentally into account - but the goal is to scratch one specific itch, not to solve everyone's problem with a single addition.
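For anyone wondering what those remote capabilities look like: the module's managers can serve a shared object over a socket. Here's a rough sketch of the idea - treat the import path, register() signature, port, and authkey as assumptions on my part, since the exact spellings may shift during the cleanup/rename work mentioned above:

    from processing.managers import BaseManager  # assumed spelling
    from Queue import Queue  # stdlib queue, 2.x name

    shared = Queue()

    class QueueManager(BaseManager):
        pass

    # Expose a callable under a public name clients can look up.
    QueueManager.register('get_queue', callable=lambda: shared)

    if __name__ == '__main__':
        # Serve the queue over TCP; a client makes the same register()
        # call, connects to the address, and then calls get_queue().
        mgr = QueueManager(address=('', 50000), authkey='change-me')
        mgr.get_server().serve_forever()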

Now all I have to get past is some bizarre errors with parallel python ramming into ulimit, uh, limits. Luckily I have everything from a dual-core to an eight-core box to hack on!
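In case anyone else trips over the same thing: the resource module will show you the limits a Python process inherits. A quick diagnostic sketch (the limit names are real resource-module constants; which ones exist varies by platform):

    import resource

    # Print the soft/hard limits most relevant to fork-heavy parallel code.
    for name in ('RLIMIT_NPROC', 'RLIMIT_NOFILE', 'RLIMIT_STACK'):
        limit = getattr(resource, name, None)
        if limit is None:
            continue  # not available on every platform
        soft, hard = resource.getrlimit(limit)
        print('%s: soft=%s hard=%s' % (name, soft, hard))

    # Raising the soft limit up to the hard limit is allowed without root
    # and sometimes clears these errors; raising the hard limit needs root.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))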