This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "IPFire 3.x development tree".
The branch, master has been updated
       via  9ee8f32b4cef43b5764c60066bfca4e7705e1161 (commit)
       via  378de11bb64d85237aa8faabea0c58c01f2a8f5f (commit)
       via  2d0455c301739575164daddbefd7d6c77b47fa25 (commit)
       via  35cb44360f50d3b6fe83cb6f6c378f650c1fc8dc (commit)
       via  69cdd37506d41b4ce92e32277c09c864ff20402b (commit)
       via  04a0dc8b21f5479cab0950370337c211e35306a6 (commit)
       via  b753064e932dbad4bde90492485712f9e6c1c2c2 (commit)
       via  73c076bd0bd0389b24db692a539643f11feff082 (commit)
       via  4b04b034d66549b89273a6f34bb233ee07ea5072 (commit)
       via  7c80d5f3f442f32431936da9722154fd729d617d (commit)
       via  8973917590f8dd95d0a1fae4788c14a0afd0a107 (commit)
       via  d98c198295c645fcc84a41a4e2a399cf63d2cca6 (commit)
       via  829d46651840a38b9d60347c5813fb300602ac8f (commit)
       via  7f363f1f514341727b2d32d74e304931fba76363 (commit)
       via  96b670061cb300d8a3c52487297c952622b779aa (commit)
       via  7ad0adbfbd3796d24c7be562fa270e804b95549c (commit)
       via  9f97fd8d4e6d8b8ab37cb9e8cb99ba0a0cd90a63 (commit)
       via  a68283df289dcf07245ba66e902b36ae8717e024 (commit)
       via  be7fb80e2a769b623bb7bfa962ef6f8e0ad5d0fc (commit)
      from  4f5ccb9d8e8c6645169da4d47d8c6042fc0cdfd0 (commit)
Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below.
- Log -----------------------------------------------------------------
commit 9ee8f32b4cef43b5764c60066bfca4e7705e1161
Merge: 378de11bb64d85237aa8faabea0c58c01f2a8f5f d98c198295c645fcc84a41a4e2a399cf63d2cca6
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 14:57:35 2010 +0100

Merge remote branch 'ms/master'

commit 378de11bb64d85237aa8faabea0c58c01f2a8f5f
Merge: 2d0455c301739575164daddbefd7d6c77b47fa25 829d46651840a38b9d60347c5813fb300602ac8f
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 14:51:25 2010 +0100

Merge remote branch 'stevee/clean'

commit 2d0455c301739575164daddbefd7d6c77b47fa25
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 14:49:58 2010 +0100

naoki: Capture time that a build takes.

commit 35cb44360f50d3b6fe83cb6f6c378f650c1fc8dc
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 14:04:55 2010 +0100

QA: Extend checks for unwanted dirs.
We do not want /usr/etc and /usr/local.
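
    This check is carried by tools/quality-agent.d/095-directory-layout (a shell
    script, per the summary of changes below). As a rough illustration of the idea
    only, a hypothetical Python sketch, not the shipped script:

        import os
        import sys

        # Directories that must not exist below the BUILDROOT.
        UNWANTED_DIRS = ("usr/etc", "usr/local")

        def check_directory_layout(buildroot):
            # Return the forbidden directories that were actually found.
            return [d for d in UNWANTED_DIRS
                    if os.path.isdir(os.path.join(buildroot, d))]

        if __name__ == "__main__":
            bad = check_directory_layout(sys.argv[1])
            if bad:
                print "QA error: unwanted directories found: %s" % ", ".join(bad)
                sys.exit(1)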

commit 69cdd37506d41b4ce92e32277c09c864ff20402b
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 14:03:08 2010 +0100

QA: Check dependency libs only if interpreter is available.
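
    The reasoning: library dependencies can only be resolved (ldd-style) when the
    ELF interpreter a binary requests is actually present in the build root; a
    freshly bootstrapped or cross-built tree may not have it yet. A minimal sketch
    of that guard, assuming hypothetical interpreter paths (the real check is one
    of the quality-agent shell scripts):

        import os
        import subprocess

        # Hypothetical interpreter locations; the real check derives this per binary.
        INTERPRETERS = ("/lib/ld-linux.so.2", "/lib64/ld-linux-x86-64.so.2")

        def interpreter_available():
            return any(os.path.exists(i) for i in INTERPRETERS)

        def check_library_deps(binary):
            if not interpreter_available():
                return  # cannot resolve libraries without the dynamic linker
            subprocess.call(["ldd", binary])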

commit 04a0dc8b21f5479cab0950370337c211e35306a6
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 13:48:57 2010 +0100

QA: Check for non-resolvable symlinks.
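
    A symlink is non-resolvable when its target does not exist. As an illustrative
    sketch only (the shipped check is tools/quality-agent.d/002-bad-symlinks, a
    shell script):

        import os

        def find_dangling_symlinks(buildroot):
            # Collect symlinks whose target cannot be resolved.
            broken = []
            for root, dirs, files in os.walk(buildroot):
                for name in dirs + files:
                    path = os.path.join(root, name)
                    if os.path.islink(path) and not os.path.exists(path):
                        broken.append(path)
            return broken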

commit b753064e932dbad4bde90492485712f9e6c1c2c2
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 13:03:54 2010 +0100

naoki: Add DOCDIR variable to constants.py.

commit 73c076bd0bd0389b24db692a539643f11feff082
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 13:03:26 2010 +0100

naoki: Add help to make.sh command.

commit 4b04b034d66549b89273a6f34bb233ee07ea5072
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 11:30:30 2010 +0100

naoki: Add some information about what packages we are going to build.

commit 7c80d5f3f442f32431936da9722154fd729d617d
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 11:15:15 2010 +0100

naoki: Speedup package list output.

commit 8973917590f8dd95d0a1fae4788c14a0afd0a107
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Tue Mar 23 11:13:55 2010 +0100

naoki: Add command line options to list built or unbuilt packages only.
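
    Assuming the tool is invoked as "naoki" (the root parser is renamed to
    sys.argv[0] in the diff below), the new switches would be used roughly like:

        naoki package list --unbuilt   # only packages that still need to be built
        naoki package list --built     # only packages that have already been built
        naoki package list -l          # long listing with version and summary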

commit d98c198295c645fcc84a41a4e2a399cf63d2cca6
Merge: 4f5ccb9d8e8c6645169da4d47d8c6042fc0cdfd0 be7fb80e2a769b623bb7bfa962ef6f8e0ad5d0fc
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Mon Mar 22 23:12:11 2010 +0100

Merge branch 'updates'

commit 829d46651840a38b9d60347c5813fb300602ac8f
Author: Schantl Stefan <Stevee@ipfire.org>
Date:   Mon Mar 22 23:01:40 2010 +0100

ebtables: Clean up naoki-makefile.

commit 7f363f1f514341727b2d32d74e304931fba76363
Author: Schantl Stefan <Stevee@ipfire.org>
Date:   Mon Mar 22 22:21:42 2010 +0100

btrfs-progs: Clean naoki-makefile.

commit 96b670061cb300d8a3c52487297c952622b779aa
Merge: 7ad0adbfbd3796d24c7be562fa270e804b95549c fca3d1ded31da44810b113b65104c6cda8c9695d
Author: Schantl Stefan <Stevee@ipfire.org>
Date:   Mon Mar 22 21:41:40 2010 +0100

Merge branch 'master' of ssh://git.ipfire.org/pub/git/ipfire-3.x into clean

commit 7ad0adbfbd3796d24c7be562fa270e804b95549c
Author: Schantl Stefan <Stevee@ipfire.org>
Date:   Mon Mar 22 15:55:53 2010 +0100

dosfstools: Clean up naoki-makefile.

commit 9f97fd8d4e6d8b8ab37cb9e8cb99ba0a0cd90a63
Author: Schantl Stefan <Stevee@ipfire.org>
Date:   Mon Mar 22 15:51:24 2010 +0100

dmraid: Clean up naoki-makefile.

commit a68283df289dcf07245ba66e902b36ae8717e024
Author: Schantl Stefan <Stevee@ipfire.org>
Date:   Mon Mar 22 15:36:37 2010 +0100

dhcp: Clean up naoki-makefile.

commit be7fb80e2a769b623bb7bfa962ef6f8e0ad5d0fc
Author: Michael Tremer <michael.tremer@ipfire.org>
Date:   Mon Mar 22 13:48:10 2010 +0100

python: Update to 2.6.5.
-----------------------------------------------------------------------
Summary of changes:
 doc/make.sh-usage                                   |   46 -
 naoki/__init__.py                                   |   13 +-
 naoki/chroot.py                                     |    7 +
 naoki/constants.py                                  |    1 +
 naoki/terminal.py                                   |  109 +-
 pkgs/core/btrfs-progs/btrfs-progs.nm                |    4 -
 pkgs/core/dhcp/dhcp.nm                              |    3 +-
 pkgs/core/dmraid/dmraid.nm                          |    8 +-
 pkgs/core/dosfstools/dosfstools.nm                  |    4 -
 pkgs/core/ebtables/ebtables.nm                      |    4 -
 .../patches/python-2.6-update-bsddb3-4.8.patch      | 3446 --------------------
 pkgs/core/python/python.nm                          |    2 +-
 tools/quality-agent.d/002-bad-symlinks              |    4 +
 tools/quality-agent.d/050-root-links-to-usr         |    7 +
 tools/quality-agent.d/095-directory-layout          |    2 +
 tools/quality-agent.d/qa-include                    |    7 +
 16 files changed, 140 insertions(+), 3527 deletions(-)
 delete mode 100644 doc/make.sh-usage
 delete mode 100644 pkgs/core/python/patches/python-2.6-update-bsddb3-4.8.patch
Difference in files:
diff --git a/doc/make.sh-usage b/doc/make.sh-usage
deleted file mode 100644
index 76e74d2..0000000
--- a/doc/make.sh-usage
+++ /dev/null
@@ -1,46 +0,0 @@
-Standard use commands in the order you may need them.
-  build     : compile the distribution
-  clean     : erase build and log to recompile everything from scratch
-
-Maintainer / advanced commands
-  toolchain : (Without argument) Create our own toolchain package.
-  |`- get   : Download from source.ipfire.org.
-  `-- put   : Upload to source.ipfire.org.
-
-  source    : _Source tarballs and patches_
-  |`- get   : Download from source.ipfire.org.
-  |-- put   : Upload to source.ipfire.org.
-  `-- list  : Show all used packages (wiki output).
-
-  target    : _Images and packages_
-  `- put    : Upload everything to the public ftp server.
-
-  shell     : Enter a shell inside the chroot, used to tune lfs script
-              and / or during kernel upgrade to rebuild a new .config.
-
-  git       : _Git functions_
-  |`- pull  : Loads the latest source files from repository.
-  |`- push  : Pushes the local commits to the repository.
-  |`- log   : Updates doc/ChangeLog.
-  |`- diff  : Mainly produce a diff from previous version to track wich
-  |           files have been changed.
-  |`- ci    : Applies your changes to the repository.
-  `-- export : Tar.bz2 the source code from revision.
-
-  check     : _Run sanity checks_
-  |           On every option you can use --fix
-  |           to fix some errors automatically.
-  |`- cpu <flag>       : Check cpu for flag.
-  |`- roofiles [--fix] : Check rootfiles for error.
-  |`- sanity [--fix]   : Check full code for errors.
-  `-- targets          : Returns the target, that the host can build.
-
-  vm        : _Run a virtual machine (which qemu)_
-  |`- boot [cdrom|disk] : Boot machine from cdrom or disk. (Default: cdrom)
-  `-- clean             : Cleanup the virtual environment.
-
-  pull      : A macro to pull the latest changes from git and load the
-              source tarballs and patches. Additionally it checks the code
-              for error.
-  push      : Similar to "pull", but uploads the source tarballs and pushes
-              all changes to git.
diff --git a/naoki/__init__.py b/naoki/__init__.py
index d49039a..ae29859 100644
--- a/naoki/__init__.py
+++ b/naoki/__init__.py
@@ -100,6 +100,9 @@ class Naoki(object):
 				% [dep.name for dep in package.dependencies_unbuilt])
 			return
+		self.log.info("Going on to build %d packages in order: %s" \
+			% (len(packages), [package.name for package in packages]))
+
 		for package in packages:
 			package.download()
@@ -177,10 +180,18 @@ Release : %(release)s
 	def call_package_list(self, args):
 		for package in backend.parse_package_info(backend.get_package_names()):
+			# Skip unbuilt packages if we want built packages
+			if args.built and not package.built:
+				continue
+
+			# Skip built packages if we want unbuilt only
+			if args.unbuilt and package.built:
+				continue
+
 			if args.long:
 				print package.fmtstr("%(name)-32s | %(version)-15s | %(summary)s")
 			else:
-				print package.fmtstr("%(name)s")
+				print package.name
 	def call_package_tree(self, args):
 		print backend.deptree(backend.parse_package(backend.get_package_names()))
diff --git a/naoki/chroot.py b/naoki/chroot.py
index 73c29da..e6b5fdc 100644
--- a/naoki/chroot.py
+++ b/naoki/chroot.py
@@ -5,6 +5,7 @@ import logging
 import os
 import random
 import stat
+import time
 import backend
 import util
@@ -258,12 +259,18 @@ class Environment(object):
 	def build(self):
 		self.package.download()
+		# Save start time
+		time_start = time.time()
+
 		try:
 			self.make("package")
 		except Error:
 			if config["cleanup_on_failure"]:
 				self.clean()
 			raise
+		finally:
+			time_end = time.time()
+			self.log.debug("Package build took %.2fs" % (time_end - time_start))
if config["cleanup_on_success"]: self.clean() diff --git a/naoki/constants.py b/naoki/constants.py index a0dfac7..c8de3ae 100644 --- a/naoki/constants.py +++ b/naoki/constants.py @@ -10,6 +10,7 @@ BUILDDIR = os.path.join(BASEDIR, "build") CACHEDIR = os.path.join(BASEDIR, "cache") CCACHEDIR = os.path.join(BASEDIR, "ccache") CONFIGDIR = os.path.join(BASEDIR, "config") +DOCDIR = os.path.join(BASEDIR, "doc") LOGDIR = os.path.join(BASEDIR, "logs") PKGSDIR = os.path.join(BASEDIR, "pkgs") PACKAGESDIR = os.path.join(BUILDDIR, "packages") diff --git a/naoki/terminal.py b/naoki/terminal.py index f5667f5..0e73d5a 100644 --- a/naoki/terminal.py +++ b/naoki/terminal.py @@ -21,11 +21,13 @@ class NameSpace(dict):
 class Parser(object):
-	def __init__(self, name, arguments=[], parsers=[]):
+	def __init__(self, name, arguments=[], parsers=[], **kwargs):
 		self.name = name
 		self.arguments = arguments
 		self.parsers = parsers
+		self._help = kwargs.get("help", "No help available")
+
 		self.subparser = None
 	def __repr__(self):
@@ -70,24 +72,67 @@
return ret
+	@property
+	def help_line(self):
+		ret = ""
+		for argument in self.arguments:
+			ret += " %s" % argument.help_line
+
+		if self.parsers:
+			ret += " <command ...>"
+
+		return ret
+
+	def help(self, indent=0):
+		ret = self.name
+
+		help_line = self.help_line
+		if help_line:
+			ret += self.help_line + "\n"
+			if self._help:
+				ret += "\n  " + self._help + "\n"
+		else:
+			ret += " - " + self._help + "\n"
+
+		if self.arguments:
+			ret += "\n"
+			for argument in sorted(self.arguments):
+				ret += "  %15s | %s\n" % (", ".join(argument.args), argument.help)
+			ret += "\n"
+
+		if self.parsers:
+			ret += "\n"
+			for parser in self.parsers:
+				ret += parser.help(indent=indent + 4)
+
+		indent_string = " " * 4
+		return indent_string.join(["%s\n" % line for line in ret.split("\n")])
+
 class _Argument(object):
 	DEFAULT_HELP = "No help available"
 	def __init__(self, name, args, **kwargs):
 		self.name = name
-		self.args = args
+		self.args = sorted(args, reverse=True, key=str.__len__)
 		self.help = kwargs.get("help", self.DEFAULT_HELP)
 		self._parsed = False
 		self._parsed_args = []
+	def __cmp__(self, other):
+		return cmp(self.name, other.name)
+
 	def parse(self, args):
 		raise NotImplementedError
 	def value(self):
 		raise NotImplementedError
+	@property
+	def help_line(self):
+		raise NotImplementedError
+
 class Option(_Argument):
 	def parse(self, args):
@@ -109,12 +154,19 @@
return False
+	@property
+	def help_line(self):
+		return "[%s]" % ", ".join(self.args)
+
 class Choice(_Argument):
 	def __init__(self, *args, **kwargs):
 		_Argument.__init__(self, *args, **kwargs)
 		self.choices = kwargs.get("choices", [])
+		self.choices.sort()
+
+		self.help += " [%s]" % ", ".join(self.choices)
 	def parse(self, args):
 		self._parsed = True
@@ -144,6 +196,10 @@
return None
+	@property
+	def help_line(self):
+		return "[%s %s]" % (", ".join(self.args), self.name.upper())
+
 class List(_Argument):
 	def __init__(self, name, **kwargs):
@@ -158,6 +214,11 @@ class List(_Argument):
 	def value(self):
 		return self._parsed_args
+	@property
+	def help_line(self):
+		name = self.name[:-1] + " "
+		return "[" + name * 2 + "...]"
+
 class Commandline(object):
 	def __init__(self, naoki):
@@ -180,7 +241,8 @@ class Commandline(object):
 		arches.set(args.arch)
 	def __parse(self):
-		parser = Parser("root",
+		parser = Parser(sys.argv[0],
+			help="Global help",
 			arguments=[
 				Option("help", ["-h", "--help"], help="Show help text"),
 				Option("quiet", ["-q", "--quiet"], help="Set quiet mode"),
@@ -191,66 +253,79 @@ class Commandline(object):
 			parsers=[
 				# Build
 				Parser("build",
+					help="Primary build command",
 					arguments=[
-						List("packages"),
+						List("packages", help="Give a list of packages to build or say 'all'"),
 					]),
 				# Toolchain
 				Parser("toolchain",
 					parsers=[
-						Parser("download"),
-						Parser("build"),
-						Parser("tree"),
+						Parser("download", help="Download a toolchain"),
+						Parser("build", help="Build the toolchain"),
+						Parser("tree", help="Show package tree of toolchain"),
 					]),
 				# Package
 				Parser("package",
 					parsers=[
 						Parser("info",
+							help="Show detailed information about given packages",
 							arguments=[
-								Option("long", ["-l", "--long"]),
-								Option("machine", ["--machine"]),
-								Option("wiki", ["--wiki"]),
+								Option("long", ["-l", "--long"], help="Show long list of information"),
+								Option("machine", ["--machine"], help="Output in machine parseable format"),
+								Option("wiki", ["--wiki"], help="Output in wiki format"),
 								List("packages"),
 							]),
-						Parser("tree"),
+						Parser("tree", help="Show package tree"),
 						Parser("list",
+							help="Show package list",
 							arguments=[
-								Option("long", ["-l", "--long"]),
+								Option("long", ["-l", "--long"], help="Show list with lots of information"),
+								Option("unbuilt", ["-u", "--unbuilt"], help="Do only show unbuilt packages"),
+								Option("built", ["-b", "--built"], help="Do only show already built packages"),
 							]),
 						Parser("groups",
+							help="Show package groups",
 							arguments=[
-								Option("wiki", ["--wiki"]),
+								Option("wiki", ["--wiki"], help="Output in wiki format"),
 							]),
 					]),
 				# Source
 				Parser("source",
+					help="Handle source tarballs",
 					parsers=[
 						Parser("download",
+							help="Download source tarballs",
 							arguments=[
 								List("packages"),
 							]),
 						Parser("upload",
+							help="Upload source tarballs",
 							arguments=[
 								List("packages"),
 							]),
-						Parser("clean"),
+						Parser("clean", help="Cleanup unused tarballs"),
 					]),
 				# Check
 				Parser("check",
+					help="Check commands",
 					parsers=[
-						Parser("host"),
+						Parser("host", help="Check if host fullfills prerequisites"),
 					]),
 				# Batch
 				Parser("batch",
+					help="Batch command - use with caution",
 					parsers=[
-						Parser("cron"),
+						Parser("cron", help="Command that gets called by cron"),
 					]),
 			])
+		self.parser = parser
+
 		args = parser.parse(sys.argv[1:])
 		if args:
@@ -259,7 +334,7 @@ class Commandline(object):
 		return parser.values
 	def help(self):
-		print "PRINTING HELP TEXT"
+		print >>sys.stderr, self.parser.help(),
 DEFAULT_COLUMNS = 80
diff --git a/pkgs/core/btrfs-progs/btrfs-progs.nm b/pkgs/core/btrfs-progs/btrfs-progs.nm
index cb3812d..eacd751 100644
--- a/pkgs/core/btrfs-progs/btrfs-progs.nm
+++ b/pkgs/core/btrfs-progs/btrfs-progs.nm
@@ -43,10 +43,6 @@ endef
PKG_TARBALL = $(THISAPP).tar.bz2
-###############################################################################
-# Installation Details
-###############################################################################
-
 define STAGE_BUILD
 	cd $(DIR_APP) && make CFLAGS="$(CFLAGS)" $(PARALLELISMFLAGS)
 endef
diff --git a/pkgs/core/dhcp/dhcp.nm b/pkgs/core/dhcp/dhcp.nm
index 0c85735..008c9ad 100644
--- a/pkgs/core/dhcp/dhcp.nm
+++ b/pkgs/core/dhcp/dhcp.nm
@@ -44,7 +44,8 @@ endef
PKG_TARBALL = $(THISAPP).tar.gz
-CONFIGURE_OPTIONS += --sysconfdir=/etc \
+CONFIGURE_OPTIONS += \
+	--sysconfdir=/etc \
 	--with-srv-lease-file=/var/lib/dhcpd/dhcpd.leases \
 	--with-cli-lease-file=/var/lib/dhclient/dhclient.leases \
 	--with-srv-pid-file=/var/run/dhcpd.pid \
diff --git a/pkgs/core/dmraid/dmraid.nm b/pkgs/core/dmraid/dmraid.nm
index 16df42d..8a85d64 100644
--- a/pkgs/core/dmraid/dmraid.nm
+++ b/pkgs/core/dmraid/dmraid.nm
@@ -45,7 +45,10 @@ endef
 PKG_TARBALL = $(THISAPP).tar.bz2
 DIR_APP = $(DIR_SRC)/$(PKG_NAME)/$(PKG_VER)
-PARALLELISMFLAGS =
+
+PARALLELISMFLAGS = # Disabled
+
+STAGE_INSTALL_TARGETS += sbindir=$(BUILDROOT)/sbin
 CONFIGURE_OPTIONS += \
 	--sbindir=/sbin \
@@ -57,8 +60,7 @@ define STAGE_BUILD_CMDS
 	cd $(DIR_APP) && make -C lib libdmraid.so
 endef
-define STAGE_INSTALL
-	cd $(DIR_APP) && make install DESTDIR=$(BUILDROOT) sbindir=$(BUILDROOT)/sbin
+define STAGE_INSTALL_CMDS
 	-mkdir -pv $(BUILDROOT)/{,usr}/lib
 	cd $(DIR_APP) && install -v -m 755 lib/libdmraid.so \
 		$(BUILDROOT)/lib/libdmraid.so.$(PKG_VER)
diff --git a/pkgs/core/dosfstools/dosfstools.nm b/pkgs/core/dosfstools/dosfstools.nm
index c855c59..7b6d980 100644
--- a/pkgs/core/dosfstools/dosfstools.nm
+++ b/pkgs/core/dosfstools/dosfstools.nm
@@ -42,10 +42,6 @@ endef
PKG_TARBALL = $(THISAPP).tar.bz2
-###############################################################################
-# Installation Details
-###############################################################################
-
 define STAGE_BUILD
 	cd $(DIR_APP) && make $(PARALLELISMFLAGS)
 endef
diff --git a/pkgs/core/ebtables/ebtables.nm b/pkgs/core/ebtables/ebtables.nm
index 4b889b2..5eb314e 100644
--- a/pkgs/core/ebtables/ebtables.nm
+++ b/pkgs/core/ebtables/ebtables.nm
@@ -49,10 +49,6 @@ define QUALITY_AGENT_WHITELIST_RPATH
 	/sbin/ebtables
 endef
-############################################################################### -# Installation Details -############################################################################### - define STAGE_BUILD cd $(DIR_APP) && make CFLAGS="$(CFLAGS)" BINDIR="/sbin" \ LIBDIR="/lib/ebtables" MANDIR="/usr/share/man" $(PARALLELISMFLAGS) diff --git a/pkgs/core/python/patches/python-2.6-update-bsddb3-4.8.patch b/pkgs/core/python/patches/python-2.6-update-bsddb3-4.8.patch deleted file mode 100644 index 4f25058..0000000 --- a/pkgs/core/python/patches/python-2.6-update-bsddb3-4.8.patch +++ /dev/null @@ -1,3446 +0,0 @@ -diff -Nupr Python-2.6.4.orig/Lib/bsddb/dbobj.py Python-2.6.4/Lib/bsddb/dbobj.py ---- Python-2.6.4.orig/Lib/bsddb/dbobj.py 2008-07-23 07:38:42.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/dbobj.py 2009-12-04 07:36:00.000000000 -0500 -@@ -110,15 +110,17 @@ class DBEnv: - def log_stat(self, *args, **kwargs): - return apply(self._cobj.log_stat, args, kwargs) - -- if db.version() >= (4,1): -- def dbremove(self, *args, **kwargs): -- return apply(self._cobj.dbremove, args, kwargs) -- def dbrename(self, *args, **kwargs): -- return apply(self._cobj.dbrename, args, kwargs) -- def set_encrypt(self, *args, **kwargs): -- return apply(self._cobj.set_encrypt, args, kwargs) -+ def dbremove(self, *args, **kwargs): -+ return apply(self._cobj.dbremove, args, kwargs) -+ def dbrename(self, *args, **kwargs): -+ return apply(self._cobj.dbrename, args, kwargs) -+ def set_encrypt(self, *args, **kwargs): -+ return apply(self._cobj.set_encrypt, args, kwargs) - - if db.version() >= (4,4): -+ def fileid_reset(self, *args, **kwargs): -+ return self._cobj.fileid_reset(*args, **kwargs) -+ - def lsn_reset(self, *args, **kwargs): - return apply(self._cobj.lsn_reset, args, kwargs) - -@@ -229,9 +231,8 @@ class DB(MutableMapping): - def set_get_returns_none(self, *args, **kwargs): - return apply(self._cobj.set_get_returns_none, args, kwargs) - -- if db.version() >= (4,1): -- def set_encrypt(self, *args, **kwargs): -- return apply(self._cobj.set_encrypt, args, kwargs) -+ def set_encrypt(self, *args, **kwargs): -+ return apply(self._cobj.set_encrypt, args, kwargs) - - - class DBSequence: -diff -Nupr Python-2.6.4.orig/Lib/bsddb/dbtables.py Python-2.6.4/Lib/bsddb/dbtables.py ---- Python-2.6.4.orig/Lib/bsddb/dbtables.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/dbtables.py 2009-12-04 07:36:00.000000000 -0500 -@@ -15,7 +15,7 @@ - # This provides a simple database table interface built on top of - # the Python Berkeley DB 3 interface. - # --_cvsid = '$Id: dbtables.py 66088 2008-08-31 14:00:51Z jesus.cea $' -+_cvsid = '$Id: dbtables.py 58758 2007-11-01 21:15:36Z gregory.p.smith $' - - import re - import sys -@@ -659,6 +659,13 @@ class bsdTableDB : - a = atuple[1] - b = btuple[1] - if type(a) is type(b): -+ -+ # Needed for python 3. 
"cmp" vanished in 3.0.1 -+ def cmp(a, b) : -+ if a==b : return 0 -+ if a<b : return -1 -+ return 1 -+ - if isinstance(a, PrefixCond) and isinstance(b, PrefixCond): - # longest prefix first - return cmp(len(b.prefix), len(a.prefix)) -diff -Nupr Python-2.6.4.orig/Lib/bsddb/dbutils.py Python-2.6.4/Lib/bsddb/dbutils.py ---- Python-2.6.4.orig/Lib/bsddb/dbutils.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/dbutils.py 2009-12-04 07:36:00.000000000 -0500 -@@ -61,7 +61,7 @@ def DeadlockWrap(function, *_args, **_kw - """ - sleeptime = _deadlock_MinSleepTime - max_retries = _kwargs.get('max_retries', -1) -- if _kwargs.has_key('max_retries'): -+ if 'max_retries' in _kwargs: - del _kwargs['max_retries'] - while True: - try: -diff -Nupr Python-2.6.4.orig/Lib/bsddb/__init__.py Python-2.6.4/Lib/bsddb/__init__.py ---- Python-2.6.4.orig/Lib/bsddb/__init__.py 2008-09-05 14:33:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/__init__.py 2009-12-04 07:36:00.000000000 -0500 -@@ -33,7 +33,7 @@ - #---------------------------------------------------------------------- - - --"""Support for Berkeley DB 4.0 through 4.7 with a simple interface. -+"""Support for Berkeley DB 4.1 through 4.8 with a simple interface. - - For the full featured object oriented interface use the bsddb.db module - instead. It mirrors the Oracle Berkeley DB C API. -@@ -42,12 +42,6 @@ instead. It mirrors the Oracle Berkeley - import sys - absolute_import = (sys.version_info[0] >= 3) - --if sys.py3kwarning: -- import warnings -- warnings.warnpy3k("in 3.x, bsddb has been removed; " -- "please use the pybsddb project instead", -- DeprecationWarning, 2) -- - try: - if __name__ == 'bsddb3': - # import _pybsddb binary as it should be the more recent version from -@@ -442,8 +436,10 @@ def _checkflag(flag, file): - # Berkeley DB was too. 
- - try: -- import thread -- del thread -+ # 2to3 automatically changes "import thread" to "import _thread" -+ import thread as T -+ del T -+ - except ImportError: - db.DB_THREAD = 0 - -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_all.py Python-2.6.4/Lib/bsddb/test/test_all.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_all.py 2008-09-03 18:07:11.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_all.py 2009-12-04 07:36:00.000000000 -0500 -@@ -203,6 +203,16 @@ if sys.version_info[0] >= 3 : - k = bytes(k, charset) - return self._db.has_key(k, txn=txn) - -+ def set_re_delim(self, c) : -+ if isinstance(c, str) : # We can use a numeric value byte too -+ c = bytes(c, charset) -+ return self._db.set_re_delim(c) -+ -+ def set_re_pad(self, c) : -+ if isinstance(c, str) : # We can use a numeric value byte too -+ c = bytes(c, charset) -+ return self._db.set_re_pad(c) -+ - def put(self, key, value, txn=None, flags=0, dlen=-1, doff=-1) : - if isinstance(key, str) : - key = bytes(key, charset) -@@ -221,6 +231,11 @@ if sys.version_info[0] >= 3 : - key = bytes(key, charset) - return self._db.get_size(key) - -+ def exists(self, key, *args, **kwargs) : -+ if isinstance(key, str) : -+ key = bytes(key, charset) -+ return self._db.exists(key, *args, **kwargs) -+ - def get(self, key, default="MagicCookie", txn=None, flags=0, dlen=-1, doff=-1) : - if isinstance(key, str) : - key = bytes(key, charset) -@@ -288,13 +303,21 @@ if sys.version_info[0] >= 3 : - key = key.decode(charset) - data = data.decode(charset) - key = self._callback(key, data) -- if (key != bsddb._db.DB_DONOTINDEX) and isinstance(key, -- str) : -- key = bytes(key, charset) -+ if (key != bsddb._db.DB_DONOTINDEX) : -+ if isinstance(key, str) : -+ key = bytes(key, charset) -+ elif isinstance(key, list) : -+ key2 = [] -+ for i in key : -+ if isinstance(i, str) : -+ i = bytes(i, charset) -+ key2.append(i) -+ key = key2 - return key - - return self._db.associate(secondarydb._db, -- associate_callback(callback).callback, flags=flags, txn=txn) -+ associate_callback(callback).callback, flags=flags, -+ txn=txn) - - def cursor(self, txn=None, flags=0) : - return cursor_py3k(self._db, txn=txn, flags=flags) -@@ -310,6 +333,12 @@ if sys.version_info[0] >= 3 : - def __getattr__(self, v) : - return getattr(self._dbenv, v) - -+ def get_data_dirs(self) : -+ # Have to use a list comprehension and not -+ # generators, because we are supporting Python 2.3. 
-+ return tuple( -+ [i.decode(charset) for i in self._dbenv.get_data_dirs()]) -+ - class DBSequence_py3k(object) : - def __init__(self, db, *args, **kwargs) : - self._db=db -@@ -332,7 +361,10 @@ if sys.version_info[0] >= 3 : - - bsddb._db.DBEnv_orig = bsddb._db.DBEnv - bsddb._db.DB_orig = bsddb._db.DB -- bsddb._db.DBSequence_orig = bsddb._db.DBSequence -+ if bsddb.db.version() <= (4, 3) : -+ bsddb._db.DBSequence_orig = None -+ else : -+ bsddb._db.DBSequence_orig = bsddb._db.DBSequence - - def do_proxy_db_py3k(flag) : - flag2 = do_proxy_db_py3k.flag -@@ -481,6 +513,7 @@ def suite(module_prefix='', timing_check - test_modules = [ - 'test_associate', - 'test_basics', -+ 'test_dbenv', - 'test_compare', - 'test_compat', - 'test_cursor_pget_bug', -@@ -489,6 +522,7 @@ def suite(module_prefix='', timing_check - 'test_dbtables', - 'test_distributed_transactions', - 'test_early_close', -+ 'test_fileid', - 'test_get_none', - 'test_join', - 'test_lock', -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_associate.py Python-2.6.4/Lib/bsddb/test/test_associate.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_associate.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_associate.py 2009-12-04 07:36:00.000000000 -0500 -@@ -148,12 +148,8 @@ class AssociateTestCase(unittest.TestCas - self.secDB = None - self.primary = db.DB(self.env) - self.primary.set_get_returns_none(2) -- if db.version() >= (4, 1): -- self.primary.open(self.filename, "primary", self.dbtype, -- db.DB_CREATE | db.DB_THREAD | self.dbFlags, txn=txn) -- else: -- self.primary.open(self.filename, "primary", self.dbtype, -- db.DB_CREATE | db.DB_THREAD | self.dbFlags) -+ self.primary.open(self.filename, "primary", self.dbtype, -+ db.DB_CREATE | db.DB_THREAD | self.dbFlags, txn=txn) - - def closeDB(self): - if self.cur: -@@ -169,12 +165,7 @@ class AssociateTestCase(unittest.TestCas - return self.primary - - -- def test01_associateWithDB(self): -- if verbose: -- print '\n', '-=' * 30 -- print "Running %s.test01_associateWithDB..." % \ -- self.__class__.__name__ -- -+ def _associateWithDB(self, getGenre): - self.createDB() - - self.secDB = db.DB(self.env) -@@ -182,19 +173,21 @@ class AssociateTestCase(unittest.TestCas - self.secDB.set_get_returns_none(2) - self.secDB.open(self.filename, "secondary", db.DB_BTREE, - db.DB_CREATE | db.DB_THREAD | self.dbFlags) -- self.getDB().associate(self.secDB, self.getGenre) -+ self.getDB().associate(self.secDB, getGenre) - - self.addDataToDB(self.getDB()) - - self.finish_test(self.secDB) - -- -- def test02_associateAfterDB(self): -+ def test01_associateWithDB(self): - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test02_associateAfterDB..." % \ -+ print "Running %s.test01_associateWithDB..." % \ - self.__class__.__name__ - -+ return self._associateWithDB(self.getGenre) -+ -+ def _associateAfterDB(self, getGenre) : - self.createDB() - self.addDataToDB(self.getDB()) - -@@ -204,10 +197,35 @@ class AssociateTestCase(unittest.TestCas - db.DB_CREATE | db.DB_THREAD | self.dbFlags) - - # adding the DB_CREATE flag will cause it to index existing records -- self.getDB().associate(self.secDB, self.getGenre, db.DB_CREATE) -+ self.getDB().associate(self.secDB, getGenre, db.DB_CREATE) - - self.finish_test(self.secDB) - -+ def test02_associateAfterDB(self): -+ if verbose: -+ print '\n', '-=' * 30 -+ print "Running %s.test02_associateAfterDB..." 
% \ -+ self.__class__.__name__ -+ -+ return self._associateAfterDB(self.getGenre) -+ -+ if db.version() >= (4, 6): -+ def test03_associateWithDB(self): -+ if verbose: -+ print '\n', '-=' * 30 -+ print "Running %s.test03_associateWithDB..." % \ -+ self.__class__.__name__ -+ -+ return self._associateWithDB(self.getGenreList) -+ -+ def test04_associateAfterDB(self): -+ if verbose: -+ print '\n', '-=' * 30 -+ print "Running %s.test04_associateAfterDB..." % \ -+ self.__class__.__name__ -+ -+ return self._associateAfterDB(self.getGenreList) -+ - - def finish_test(self, secDB, txn=None): - # 'Blues' should not be in the secondary database -@@ -277,6 +295,12 @@ class AssociateTestCase(unittest.TestCas - else: - return genre - -+ def getGenreList(self, priKey, PriData) : -+ v = self.getGenre(priKey, PriData) -+ if type(v) == type("") : -+ v = [v] -+ return v -+ - - #---------------------------------------------------------------------- - -@@ -322,10 +346,7 @@ class AssociateBTreeTxnTestCase(Associat - self.secDB.set_get_returns_none(2) - self.secDB.open(self.filename, "secondary", db.DB_BTREE, - db.DB_CREATE | db.DB_THREAD, txn=txn) -- if db.version() >= (4,1): -- self.getDB().associate(self.secDB, self.getGenre, txn=txn) -- else: -- self.getDB().associate(self.secDB, self.getGenre) -+ self.getDB().associate(self.secDB, self.getGenre, txn=txn) - - self.addDataToDB(self.getDB(), txn=txn) - except: -@@ -426,8 +447,7 @@ def test_suite(): - suite.addTest(unittest.makeSuite(AssociateBTreeTestCase)) - suite.addTest(unittest.makeSuite(AssociateRecnoTestCase)) - -- if db.version() >= (4, 1): -- suite.addTest(unittest.makeSuite(AssociateBTreeTxnTestCase)) -+ suite.addTest(unittest.makeSuite(AssociateBTreeTxnTestCase)) - - suite.addTest(unittest.makeSuite(ShelveAssociateHashTestCase)) - suite.addTest(unittest.makeSuite(ShelveAssociateBTreeTestCase)) -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_basics.py Python-2.6.4/Lib/bsddb/test/test_basics.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_basics.py 2009-07-02 11:37:21.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_basics.py 2009-12-04 07:36:00.000000000 -0500 -@@ -33,6 +33,7 @@ class VersionTestCase(unittest.TestCase) - - class BasicTestCase(unittest.TestCase): - dbtype = db.DB_UNKNOWN # must be set in derived class -+ cachesize = (0, 1024*1024, 1) - dbopenflags = 0 - dbsetflags = 0 - dbmode = 0660 -@@ -43,6 +44,13 @@ class BasicTestCase(unittest.TestCase): - - _numKeys = 1002 # PRIVATE. 
NOTE: must be an even value - -+ import sys -+ if sys.version_info[:3] < (2, 4, 0): -+ def assertTrue(self, expr, msg=None): -+ self.failUnless(expr,msg=msg) -+ def assertFalse(self, expr, msg=None): -+ self.failIf(expr,msg=msg) -+ - def setUp(self): - if self.useEnv: - self.homeDir=get_new_environment_path() -@@ -50,7 +58,8 @@ class BasicTestCase(unittest.TestCase): - self.env = db.DBEnv() - self.env.set_lg_max(1024*1024) - self.env.set_tx_max(30) -- self.env.set_tx_timestamp(int(time.time())) -+ self._t = int(time.time()) -+ self.env.set_tx_timestamp(self._t) - self.env.set_flags(self.envsetflags, 1) - self.env.open(self.homeDir, self.envflags | db.DB_CREATE) - self.filename = "test" -@@ -64,6 +73,15 @@ class BasicTestCase(unittest.TestCase): - - # create and open the DB - self.d = db.DB(self.env) -+ if not self.useEnv : -+ if db.version() >= (4, 2) : -+ self.d.set_cachesize(*self.cachesize) -+ cachesize = self.d.get_cachesize() -+ self.assertEqual(cachesize[0], self.cachesize[0]) -+ self.assertEqual(cachesize[2], self.cachesize[2]) -+ # Berkeley DB expands the cache 25% accounting overhead, -+ # if the cache is small. -+ self.assertEqual(125, int(100.0*cachesize[1]/self.cachesize[1])) - self.d.set_flags(self.dbsetflags) - if self.dbname: - self.d.open(self.filename, self.dbname, self.dbtype, -@@ -74,6 +92,10 @@ class BasicTestCase(unittest.TestCase): - dbtype = self.dbtype, - flags = self.dbopenflags|db.DB_CREATE) - -+ if not self.useEnv: -+ self.assertRaises(db.DBInvalidArgError, -+ self.d.set_cachesize, *self.cachesize) -+ - self.populateDB() - - -@@ -131,7 +153,7 @@ class BasicTestCase(unittest.TestCase): - - self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321') - -- # By default non-existent keys return None... -+ # By default non-existant keys return None... - self.assertEqual(d.get('abcd'), None) - - # ...but they raise exceptions in other situations. Call -@@ -276,6 +298,21 @@ class BasicTestCase(unittest.TestCase): - pprint(values[:10]) - - -+ #---------------------------------------- -+ -+ def test02b_SequenceMethods(self): -+ d = self.d -+ -+ for key in ['0002', '0101', '0401', '0701', '0998']: -+ data = d[key] -+ self.assertEqual(data, self.makeData(key)) -+ if verbose: -+ print data -+ -+ self.assertTrue(hasattr(d, "__contains__")) -+ self.assertTrue("0401" in d) -+ self.assertFalse("1234" in d) -+ - - #---------------------------------------- - -@@ -509,6 +546,15 @@ class BasicTestCase(unittest.TestCase): - self.assertEqual(old, 1) - self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=0) - -+ if db.version() >= (4, 6): -+ def test03d_SimpleCursorPriority(self) : -+ c = self.d.cursor() -+ c.set_priority(db.DB_PRIORITY_VERY_LOW) # Positional -+ self.assertEqual(db.DB_PRIORITY_VERY_LOW, c.get_priority()) -+ c.set_priority(priority=db.DB_PRIORITY_HIGH) # Keyword -+ self.assertEqual(db.DB_PRIORITY_HIGH, c.get_priority()) -+ c.close() -+ - #---------------------------------------- - - def test04_PartialGetAndPut(self): -@@ -562,7 +608,7 @@ class BasicTestCase(unittest.TestCase): - d = self.d - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test99_Truncate..." % self.__class__.__name__ -+ print "Running %s.test06_Truncate..." 
% self.__class__.__name__ - - d.put("abcde", "ABCDE"); - num = d.truncate() -@@ -582,6 +628,33 @@ class BasicTestCase(unittest.TestCase): - - #---------------------------------------- - -+ if db.version() >= (4, 6): -+ def test08_exists(self) : -+ self.d.put("abcde", "ABCDE") -+ self.assert_(self.d.exists("abcde") == True, -+ "DB->exists() returns wrong value") -+ self.assert_(self.d.exists("x") == False, -+ "DB->exists() returns wrong value") -+ -+ #---------------------------------------- -+ -+ if db.version() >= (4, 7): -+ def test_compact(self) : -+ d = self.d -+ self.assertEqual(0, d.compact(flags=db.DB_FREELIST_ONLY)) -+ self.assertEqual(0, d.compact(flags=db.DB_FREELIST_ONLY)) -+ d.put("abcde", "ABCDE"); -+ d.put("bcde", "BCDE"); -+ d.put("abc", "ABC"); -+ d.put("monty", "python"); -+ d.delete("abc") -+ d.delete("bcde") -+ d.compact(start='abcde', stop='monty', txn=None, -+ compact_fillpercent=42, compact_pages=1, -+ compact_timeout=50000000, -+ flags=db.DB_FREELIST_ONLY|db.DB_FREE_SPACE) -+ -+ #---------------------------------------- - - #---------------------------------------------------------------------- - -@@ -611,13 +684,13 @@ class BasicWithEnvTestCase(BasicTestCase - - #---------------------------------------- - -- def test08_EnvRemoveAndRename(self): -+ def test09_EnvRemoveAndRename(self): - if not self.env: - return - - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test08_EnvRemoveAndRename..." % self.__class__.__name__ -+ print "Running %s.test09_EnvRemoveAndRename..." % self.__class__.__name__ - - # can't rename or remove an open DB - self.d.close() -@@ -626,10 +699,6 @@ class BasicWithEnvTestCase(BasicTestCase - self.env.dbrename(self.filename, None, newname) - self.env.dbremove(newname) - -- # dbremove and dbrename are in 4.1 and later -- if db.version() < (4,1): -- del test08_EnvRemoveAndRename -- - #---------------------------------------- - - class BasicBTreeWithEnvTestCase(BasicWithEnvTestCase): -@@ -729,11 +798,25 @@ class BasicTransactionTestCase(BasicTest - - #---------------------------------------- - -- def test08_TxnTruncate(self): -+ if db.version() >= (4, 6): -+ def test08_exists(self) : -+ txn = self.env.txn_begin() -+ self.d.put("abcde", "ABCDE", txn=txn) -+ txn.commit() -+ txn = self.env.txn_begin() -+ self.assert_(self.d.exists("abcde", txn=txn) == True, -+ "DB->exists() returns wrong value") -+ self.assert_(self.d.exists("x", txn=txn) == False, -+ "DB->exists() returns wrong value") -+ txn.abort() -+ -+ #---------------------------------------- -+ -+ def test09_TxnTruncate(self): - d = self.d - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test08_TxnTruncate..." % self.__class__.__name__ -+ print "Running %s.test09_TxnTruncate..." 
% self.__class__.__name__ - - d.put("abcde", "ABCDE"); - txn = self.env.txn_begin() -@@ -746,7 +829,7 @@ class BasicTransactionTestCase(BasicTest - - #---------------------------------------- - -- def test09_TxnLateUse(self): -+ def test10_TxnLateUse(self): - txn = self.env.txn_begin() - txn.abort() - try: -@@ -766,6 +849,39 @@ class BasicTransactionTestCase(BasicTest - raise RuntimeError, "DBTxn.commit() called after DB_TXN no longer valid w/o an exception" - - -+ #---------------------------------------- -+ -+ -+ if db.version() >= (4, 4): -+ def test_txn_name(self) : -+ txn=self.env.txn_begin() -+ self.assertEqual(txn.get_name(), "") -+ txn.set_name("XXYY") -+ self.assertEqual(txn.get_name(), "XXYY") -+ txn.set_name("") -+ self.assertEqual(txn.get_name(), "") -+ txn.abort() -+ -+ #---------------------------------------- -+ -+ -+ def test_txn_set_timeout(self) : -+ txn=self.env.txn_begin() -+ txn.set_timeout(1234567, db.DB_SET_LOCK_TIMEOUT) -+ txn.set_timeout(2345678, flags=db.DB_SET_TXN_TIMEOUT) -+ txn.abort() -+ -+ #---------------------------------------- -+ -+ if db.version() >= (4, 2) : -+ def test_get_tx_max(self) : -+ self.assertEqual(self.env.get_tx_max(), 30) -+ -+ def test_get_tx_timestamp(self) : -+ self.assertEqual(self.env.get_tx_timestamp(), self._t) -+ -+ -+ - class BTreeTransactionTestCase(BasicTransactionTestCase): - dbtype = db.DB_BTREE - -@@ -780,11 +896,11 @@ class BTreeRecnoTestCase(BasicTestCase): - dbtype = db.DB_BTREE - dbsetflags = db.DB_RECNUM - -- def test08_RecnoInBTree(self): -+ def test09_RecnoInBTree(self): - d = self.d - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test08_RecnoInBTree..." % self.__class__.__name__ -+ print "Running %s.test09_RecnoInBTree..." % self.__class__.__name__ - - rec = d.get(200) - self.assertEqual(type(rec), type(())) -@@ -814,11 +930,11 @@ class BTreeRecnoWithThreadFlagTestCase(B - class BasicDUPTestCase(BasicTestCase): - dbsetflags = db.DB_DUP - -- def test09_DuplicateKeys(self): -+ def test10_DuplicateKeys(self): - d = self.d - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test09_DuplicateKeys..." % \ -+ print "Running %s.test10_DuplicateKeys..." % \ - self.__class__.__name__ - - d.put("dup0", "before") -@@ -887,11 +1003,11 @@ class BasicMultiDBTestCase(BasicTestCase - else: - return db.DB_BTREE - -- def test10_MultiDB(self): -+ def test11_MultiDB(self): - d1 = self.d - if verbose: - print '\n', '-=' * 30 -- print "Running %s.test10_MultiDB..." % self.__class__.__name__ -+ print "Running %s.test11_MultiDB..." 
% self.__class__.__name__ - - d2 = db.DB(self.env) - d2.open(self.filename, "second", self.dbtype, -@@ -1032,11 +1148,12 @@ class CrashAndBurn(unittest.TestCase) : - # # See http://bugs.python.org/issue3307 - # self.assertRaises(db.DBInvalidArgError, db.DB, None, 65535) - -- def test02_DBEnv_dealloc(self): -- # http://bugs.python.org/issue3885 -- import gc -- self.assertRaises(db.DBInvalidArgError, db.DBEnv, ~db.DB_RPCCLIENT) -- gc.collect() -+ if db.version() < (4, 8) : -+ def test02_DBEnv_dealloc(self): -+ # http://bugs.python.org/issue3885 -+ import gc -+ self.assertRaises(db.DBInvalidArgError, db.DBEnv, ~db.DB_RPCCLIENT) -+ gc.collect() - - - #---------------------------------------------------------------------- -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_compare.py Python-2.6.4/Lib/bsddb/test/test_compare.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_compare.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_compare.py 2009-12-04 07:36:00.000000000 -0500 -@@ -12,6 +12,12 @@ from test_all import db, dbshelve, test_ - get_new_environment_path, get_new_database_path - - -+# Needed for python 3. "cmp" vanished in 3.0.1 -+def cmp(a, b) : -+ if a==b : return 0 -+ if a<b : return -1 -+ return 1 -+ - lexical_cmp = cmp - - def lowercase_cmp(left, right): -@@ -26,6 +32,10 @@ _expected_lexical_test_data = ['', 'CCCP - _expected_lowercase_test_data = ['', 'a', 'aaa', 'b', 'c', 'CC', 'cccce', 'ccccf', 'CCCP'] - - class ComparatorTests (unittest.TestCase): -+ if sys.version_info[:3] < (2, 4, 0): -+ def assertTrue(self, expr, msg=None): -+ self.failUnless(expr,msg=msg) -+ - def comparator_test_helper (self, comparator, expected_data): - data = expected_data[:] - -@@ -47,7 +57,7 @@ class ComparatorTests (unittest.TestCase - data2.append(i) - data = data2 - -- self.failUnless (data == expected_data, -+ self.assertTrue (data == expected_data, - "comparator `%s' is not right: %s vs. 
%s" - % (comparator, expected_data, data)) - def test_lexical_comparator (self): -@@ -65,6 +75,10 @@ class AbstractBtreeKeyCompareTestCase (u - env = None - db = None - -+ if sys.version_info[:3] < (2, 4, 0): -+ def assertTrue(self, expr, msg=None): -+ self.failUnless(expr,msg=msg) -+ - def setUp (self): - self.filename = self.__class__.__name__ + '.db' - self.homeDir = get_new_environment_path() -@@ -115,14 +129,14 @@ class AbstractBtreeKeyCompareTestCase (u - rec = curs.first () - while rec: - key, ignore = rec -- self.failUnless (index < len (expected), -+ self.assertTrue(index < len (expected), - "to many values returned from cursor") -- self.failUnless (expected[index] == key, -+ self.assertTrue(expected[index] == key, - "expected value `%s' at %d but got `%s'" - % (expected[index], index, key)) - index = index + 1 - rec = curs.next () -- self.failUnless (index == len (expected), -+ self.assertTrue(index == len (expected), - "not enough values returned from cursor") - finally: - curs.close () -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_compat.py Python-2.6.4/Lib/bsddb/test/test_compat.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_compat.py 2009-07-02 11:37:21.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_compat.py 2009-12-04 07:36:00.000000000 -0500 -@@ -133,7 +133,7 @@ class CompatibilityTestCase(unittest.Tes - except KeyError: - pass - else: -- self.fail("set_location on non-existent key did not raise KeyError") -+ self.fail("set_location on non-existant key did not raise KeyError") - - f.sync() - f.close() -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_dbenv.py Python-2.6.4/Lib/bsddb/test/test_dbenv.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_dbenv.py 1969-12-31 19:00:00.000000000 -0500 -+++ Python-2.6.4/Lib/bsddb/test/test_dbenv.py 2009-12-04 07:36:00.000000000 -0500 -@@ -0,0 +1,148 @@ -+import unittest -+import os, glob -+ -+from test_all import db, test_support, get_new_environment_path, \ -+ get_new_database_path -+ -+#---------------------------------------------------------------------- -+ -+class DBEnv(unittest.TestCase): -+ def setUp(self): -+ self.homeDir = get_new_environment_path() -+ self.env = db.DBEnv() -+ -+ def tearDown(self): -+ del self.env -+ test_support.rmtree(self.homeDir) -+ -+ if db.version() >= (4, 2) : -+ def test_setget_data_dirs(self) : -+ dirs = ("a", "b", "c", "d") -+ for i in dirs : -+ self.env.set_data_dir(i) -+ self.assertEqual(dirs, self.env.get_data_dirs()) -+ -+ def test_setget_cachesize(self) : -+ cachesize = (0, 512*1024*1024, 3) -+ self.env.set_cachesize(*cachesize) -+ self.assertEqual(cachesize, self.env.get_cachesize()) -+ -+ cachesize = (0, 1*1024*1024, 5) -+ self.env.set_cachesize(*cachesize) -+ cachesize2 = self.env.get_cachesize() -+ self.assertEqual(cachesize[0], cachesize2[0]) -+ self.assertEqual(cachesize[2], cachesize2[2]) -+ # Berkeley DB expands the cache 25% accounting overhead, -+ # if the cache is small. -+ self.assertEqual(125, int(100.0*cachesize2[1]/cachesize[1])) -+ -+ # You can not change configuration after opening -+ # the environment. -+ self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) -+ cachesize = (0, 2*1024*1024, 1) -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.set_cachesize, *cachesize) -+ self.assertEqual(cachesize2, self.env.get_cachesize()) -+ -+ def test_set_cachesize_dbenv_db(self) : -+ # You can not configure the cachesize using -+ # the database handle, if you are using an environment. 
-+ d = db.DB(self.env) -+ self.assertRaises(db.DBInvalidArgError, -+ d.set_cachesize, 0, 1024*1024, 1) -+ -+ def test_setget_shm_key(self) : -+ shm_key=137 -+ self.env.set_shm_key(shm_key) -+ self.assertEqual(shm_key, self.env.get_shm_key()) -+ self.env.set_shm_key(shm_key+1) -+ self.assertEqual(shm_key+1, self.env.get_shm_key()) -+ -+ # You can not change configuration after opening -+ # the environment. -+ self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) -+ # If we try to reconfigure cache after opening the -+ # environment, core dump. -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.set_shm_key, shm_key) -+ self.assertEqual(shm_key+1, self.env.get_shm_key()) -+ -+ if db.version() >= (4, 4) : -+ def test_mutex_setget_max(self) : -+ v = self.env.mutex_get_max() -+ v2 = v*2+1 -+ -+ self.env.mutex_set_max(v2) -+ self.assertEqual(v2, self.env.mutex_get_max()) -+ -+ self.env.mutex_set_max(v) -+ self.assertEqual(v, self.env.mutex_get_max()) -+ -+ # You can not change configuration after opening -+ # the environment. -+ self.env.open(self.homeDir, db.DB_CREATE) -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.mutex_set_max, v2) -+ -+ def test_mutex_setget_increment(self) : -+ v = self.env.mutex_get_increment() -+ v2 = 127 -+ -+ self.env.mutex_set_increment(v2) -+ self.assertEqual(v2, self.env.mutex_get_increment()) -+ -+ self.env.mutex_set_increment(v) -+ self.assertEqual(v, self.env.mutex_get_increment()) -+ -+ # You can not change configuration after opening -+ # the environment. -+ self.env.open(self.homeDir, db.DB_CREATE) -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.mutex_set_increment, v2) -+ -+ def test_mutex_setget_tas_spins(self) : -+ self.env.mutex_set_tas_spins(0) # Default = BDB decides -+ v = self.env.mutex_get_tas_spins() -+ v2 = v*2+1 -+ -+ self.env.mutex_set_tas_spins(v2) -+ self.assertEqual(v2, self.env.mutex_get_tas_spins()) -+ -+ self.env.mutex_set_tas_spins(v) -+ self.assertEqual(v, self.env.mutex_get_tas_spins()) -+ -+ # In this case, you can change configuration -+ # after opening the environment. -+ self.env.open(self.homeDir, db.DB_CREATE) -+ self.env.mutex_set_tas_spins(v2) -+ -+ def test_mutex_setget_align(self) : -+ v = self.env.mutex_get_align() -+ v2 = 64 -+ if v == 64 : -+ v2 = 128 -+ -+ self.env.mutex_set_align(v2) -+ self.assertEqual(v2, self.env.mutex_get_align()) -+ -+ # Requires a nonzero power of two -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.mutex_set_align, 0) -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.mutex_set_align, 17) -+ -+ self.env.mutex_set_align(2*v2) -+ self.assertEqual(2*v2, self.env.mutex_get_align()) -+ -+ # You can not change configuration after opening -+ # the environment. -+ self.env.open(self.homeDir, db.DB_CREATE) -+ self.assertRaises(db.DBInvalidArgError, -+ self.env.mutex_set_align, v2) -+ -+ -+def test_suite(): -+ return unittest.makeSuite(DBEnv) -+ -+if __name__ == '__main__': -+ unittest.main(defaultTest='test_suite') -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_dbtables.py Python-2.6.4/Lib/bsddb/test/test_dbtables.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_dbtables.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_dbtables.py 2009-12-04 07:36:00.000000000 -0500 -@@ -18,13 +18,17 @@ - # - # -- Gregory P. 
Smith greg@krypto.org - # --# $Id: test_dbtables.py 66088 2008-08-31 14:00:51Z jesus.cea $ -+# $Id: test_dbtables.py 58532 2007-10-18 07:56:54Z gregory.p.smith $ - --import os, re --try: -- import cPickle -- pickle = cPickle --except ImportError: -+import os, re, sys -+ -+if sys.version_info[0] < 3 : -+ try: -+ import cPickle -+ pickle = cPickle -+ except ImportError: -+ import pickle -+else : - import pickle - - import unittest -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_distributed_transactions.py Python-2.6.4/Lib/bsddb/test/test_distributed_transactions.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_distributed_transactions.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_distributed_transactions.py 2009-12-04 07:36:00.000000000 -0500 -@@ -35,9 +35,9 @@ class DBTxn_distributed(unittest.TestCas - db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | - db.DB_INIT_LOCK, 0666) - self.db = db.DB(self.dbenv) -- self.db.set_re_len(db.DB_XIDDATASIZE) -+ self.db.set_re_len(db.DB_GID_SIZE) - if must_open_db : -- if db.version() > (4,1) : -+ if db.version() >= (4,2) : - txn=self.dbenv.txn_begin() - self.db.open(self.filename, - db.DB_QUEUE, db.DB_CREATE | db.DB_THREAD, 0666, -@@ -76,7 +76,7 @@ class DBTxn_distributed(unittest.TestCas - # let them be garbage collected. - for i in xrange(self.num_txns) : - txn = self.dbenv.txn_begin() -- gid = "%%%dd" %db.DB_XIDDATASIZE -+ gid = "%%%dd" %db.DB_GID_SIZE - gid = adapt(gid %i) - self.db.put(i, gid, txn=txn, flags=db.DB_APPEND) - txns.add(gid) -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_early_close.py Python-2.6.4/Lib/bsddb/test/test_early_close.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_early_close.py 2008-09-08 20:49:16.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_early_close.py 2009-12-04 07:36:00.000000000 -0500 -@@ -155,11 +155,8 @@ class DBEnvClosedEarlyCrash(unittest.Tes - db.DB_INIT_LOG | db.DB_CREATE) - d = db.DB(dbenv) - txn = dbenv.txn_begin() -- if db.version() < (4,1) : -- d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE) -- else : -- d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE, -- txn=txn) -+ d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE, -+ txn=txn) - d.put("XXX", "yyy", txn=txn) - txn.commit() - txn = dbenv.txn_begin() -@@ -168,9 +165,9 @@ class DBEnvClosedEarlyCrash(unittest.Tes - self.assertEquals(("XXX", "yyy"), c1.first()) - import warnings - # Not interested in warnings about implicit close. -- with warnings.catch_warnings(): -- warnings.simplefilter("ignore") -- txn.commit() -+ warnings.simplefilter("ignore") -+ txn.commit() -+ warnings.resetwarnings() - self.assertRaises(db.DBCursorClosedError, c2.first) - - if db.version() > (4,3,0) : -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_fileid.py Python-2.6.4/Lib/bsddb/test/test_fileid.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_fileid.py 1969-12-31 19:00:00.000000000 -0500 -+++ Python-2.6.4/Lib/bsddb/test/test_fileid.py 2009-12-04 07:36:00.000000000 -0500 -@@ -0,0 +1,63 @@ -+"""TestCase for reseting File ID. 
-+""" -+ -+import os -+import shutil -+import unittest -+ -+from test_all import db, test_support, get_new_environment_path, get_new_database_path -+ -+class FileidResetTestCase(unittest.TestCase): -+ def setUp(self): -+ self.db_path_1 = get_new_database_path() -+ self.db_path_2 = get_new_database_path() -+ self.db_env_path = get_new_environment_path() -+ -+ def test_fileid_reset(self): -+ # create DB 1 -+ self.db1 = db.DB() -+ self.db1.open(self.db_path_1, dbtype=db.DB_HASH, flags=(db.DB_CREATE|db.DB_EXCL)) -+ self.db1.put('spam', 'eggs') -+ self.db1.close() -+ -+ shutil.copy(self.db_path_1, self.db_path_2) -+ -+ self.db2 = db.DB() -+ self.db2.open(self.db_path_2, dbtype=db.DB_HASH) -+ self.db2.put('spam', 'spam') -+ self.db2.close() -+ -+ self.db_env = db.DBEnv() -+ self.db_env.open(self.db_env_path, db.DB_CREATE|db.DB_INIT_MPOOL) -+ -+ # use fileid_reset() here -+ self.db_env.fileid_reset(self.db_path_2) -+ -+ self.db1 = db.DB(self.db_env) -+ self.db1.open(self.db_path_1, dbtype=db.DB_HASH, flags=db.DB_RDONLY) -+ self.assertEquals(self.db1.get('spam'), 'eggs') -+ -+ self.db2 = db.DB(self.db_env) -+ self.db2.open(self.db_path_2, dbtype=db.DB_HASH, flags=db.DB_RDONLY) -+ self.assertEquals(self.db2.get('spam'), 'spam') -+ -+ self.db1.close() -+ self.db2.close() -+ -+ self.db_env.close() -+ -+ def tearDown(self): -+ test_support.unlink(self.db_path_1) -+ test_support.unlink(self.db_path_2) -+ test_support.rmtree(self.db_env_path) -+ -+def test_suite(): -+ suite = unittest.TestSuite() -+ if db.version() >= (4, 4): -+ suite.addTest(unittest.makeSuite(FileidResetTestCase)) -+ return suite -+ -+if __name__ == '__main__': -+ unittest.main(defaultTest='test_suite') -+ -+ -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_lock.py Python-2.6.4/Lib/bsddb/test/test_lock.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_lock.py 2009-01-26 16:53:32.000000000 -0500 -+++ Python-2.6.4/Lib/bsddb/test/test_lock.py 2009-12-04 07:36:00.000000000 -0500 -@@ -89,7 +89,18 @@ class LockingTestCase(unittest.TestCase) - for t in threads: - t.join() - -- def test03_lock_timeout(self): -+ if db.version() >= (4, 2) : -+ def test03_lock_timeout(self): -+ self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT) -+ self.assertEqual(self.env.get_timeout(db.DB_SET_LOCK_TIMEOUT), 0) -+ self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT) -+ self.assertEqual(self.env.get_timeout(db.DB_SET_TXN_TIMEOUT), 0) -+ self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT) -+ self.assertEqual(self.env.get_timeout(db.DB_SET_LOCK_TIMEOUT), 123456) -+ self.env.set_timeout(7890123, db.DB_SET_TXN_TIMEOUT) -+ self.assertEqual(self.env.get_timeout(db.DB_SET_TXN_TIMEOUT), 7890123) -+ -+ def test04_lock_timeout2(self): - self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT) - self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT) - self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT) -@@ -124,7 +135,7 @@ class LockingTestCase(unittest.TestCase) - self.env.lock_get,anID2, "shared lock", db.DB_LOCK_READ) - end_time=time.time() - deadlock_detection.end=True -- self.assertTrue((end_time-start_time) >= 0.0999) -+ self.assertTrue((end_time-start_time) >= 0.1) - self.env.lock_put(lock) - t.join() - -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_pickle.py Python-2.6.4/Lib/bsddb/test/test_pickle.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_pickle.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_pickle.py 2009-12-04 07:36:00.000000000 -0500 -@@ -1,10 +1,16 @@ - - import os - import pickle --try: -- import cPickle --except ImportError: -+import sys -+ -+if 
sys.version_info[0] < 3 : -+ try: -+ import cPickle -+ except ImportError: -+ cPickle = None -+else : - cPickle = None -+ - import unittest - - from test_all import db, test_support, get_new_environment_path, get_new_database_path -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_recno.py Python-2.6.4/Lib/bsddb/test/test_recno.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_recno.py 2009-07-02 11:37:21.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_recno.py 2009-12-04 07:36:00.000000000 -0500 -@@ -150,7 +150,7 @@ class SimpleRecnoTestCase(unittest.TestC - if verbose: - print rec - -- # test that non-existent key lookups work (and that -+ # test that non-existant key lookups work (and that - # DBC_set_range doesn't have a memleak under valgrind) - rec = c.set_range(999999) - self.assertEqual(rec, None) -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_replication.py Python-2.6.4/Lib/bsddb/test/test_replication.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_replication.py 2008-09-17 22:47:35.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_replication.py 2009-12-04 07:36:00.000000000 -0500 -@@ -119,19 +119,7 @@ class DBReplicationManager(unittest.Test - timeout = time.time()+10 - while (time.time()<timeout) and not (self.confirmed_master and self.client_startupdone) : - time.sleep(0.02) -- # this fails on Windows as self.client_startupdone never gets set -- # to True - see bug 3892. BUT - even though this assertion -- # fails on Windows the rest of the test passes - so to prove -- # that we let the rest of the test run. Sadly we can't -- # make use of raising TestSkipped() here (unittest still -- # reports it as an error), so we yell to stderr. -- import sys -- if sys.platform=="win32": -- print >> sys.stderr, \ -- "XXX - windows bsddb replication fails on windows and is skipped" -- print >> sys.stderr, "XXX - Please see issue #3892" -- else: -- self.assertTrue(time.time()<timeout) -+ self.assertTrue(time.time()<timeout) - - d = self.dbenvMaster.repmgr_site_list() - self.assertEquals(len(d), 1) -@@ -340,6 +328,9 @@ class DBBaseReplication(DBReplicationMan - txn.commit() - break - -+ d = self.dbenvMaster.rep_stat(flags=db.DB_STAT_CLEAR); -+ self.assertTrue("master_changes" in d) -+ - txn=self.dbenvMaster.txn_begin() - self.dbMaster.put("ABC", "123", txn=txn) - txn.commit() -@@ -430,6 +421,14 @@ class DBBaseReplication(DBReplicationMan - - self.assertTrue(self.confirmed_master) - -+ if db.version() >= (4,7) : -+ def test04_test_clockskew(self) : -+ fast, slow = 1234, 1230 -+ self.dbenvMaster.rep_set_clockskew(fast, slow) -+ self.assertEqual((fast, slow), -+ self.dbenvMaster.rep_get_clockskew()) -+ self.basic_rep_threading() -+ - #---------------------------------------------------------------------- - - def test_suite(): -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test/test_sequence.py Python-2.6.4/Lib/bsddb/test/test_sequence.py ---- Python-2.6.4.orig/Lib/bsddb/test/test_sequence.py 2008-08-31 10:00:51.000000000 -0400 -+++ Python-2.6.4/Lib/bsddb/test/test_sequence.py 2009-12-04 07:36:00.000000000 -0500 -@@ -37,7 +37,7 @@ class DBSequenceTest(unittest.TestCase): - self.seq = db.DBSequence(self.d, flags=0) - start_value = 10 * self.int_32_max - self.assertEqual(0xA00000000, start_value) -- self.assertEquals(None, self.seq.init_value(start_value)) -+ self.assertEquals(None, self.seq.initial_value(start_value)) - self.assertEquals(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) - self.assertEquals(start_value, self.seq.get(5)) - self.assertEquals(start_value + 5, self.seq.get()) 
-@@ -77,7 +77,7 @@ class DBSequenceTest(unittest.TestCase): - self.seq = db.DBSequence(self.d, flags=0) - seq_range = (10 * self.int_32_max, 11 * self.int_32_max - 1) - self.assertEquals(None, self.seq.set_range(seq_range)) -- self.seq.init_value(seq_range[0]) -+ self.seq.initial_value(seq_range[0]) - self.assertEquals(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) - self.assertEquals(seq_range, self.seq.get_range()) - -@@ -110,7 +110,7 @@ class DBSequenceTest(unittest.TestCase): - value_minus=(-1L<<63)+1 # Two complement - self.assertEquals(-9223372036854775807L,value_minus) - self.seq = db.DBSequence(self.d, flags=0) -- self.assertEquals(None, self.seq.init_value(value_plus-1)) -+ self.assertEquals(None, self.seq.initial_value(value_plus-1)) - self.assertEquals(None, self.seq.open(key='id', txn=None, - flags=db.DB_CREATE)) - self.assertEquals(value_plus-1, self.seq.get(1)) -@@ -119,7 +119,7 @@ class DBSequenceTest(unittest.TestCase): - self.seq.remove(txn=None, flags=0) - - self.seq = db.DBSequence(self.d, flags=0) -- self.assertEquals(None, self.seq.init_value(value_minus)) -+ self.assertEquals(None, self.seq.initial_value(value_minus)) - self.assertEquals(None, self.seq.open(key='id', txn=None, - flags=db.DB_CREATE)) - self.assertEquals(value_minus, self.seq.get(1)) -diff -Nupr Python-2.6.4.orig/Lib/bsddb/test_support.py Python-2.6.4/Lib/bsddb/test_support.py ---- Python-2.6.4.orig/Lib/bsddb/test_support.py 1969-12-31 19:00:00.000000000 -0500 -+++ Python-2.6.4/Lib/bsddb/test_support.py 2009-12-04 07:36:00.000000000 -0500 -@@ -0,0 +1,54 @@ -+# This module is a bridge. -+# -+# Code is copied from Python 2.6 (trunk) Lib/test/test_support.py that -+# the bsddb test suite needs even when run standalone on a python -+# version that may not have all of these. -+ -+# DO NOT ADD NEW UNIQUE CODE. Copy code from the python trunk -+# trunk test_support module into here. If you need a place for your -+# own stuff specific to bsddb tests, make a bsddb.test.foo module. -+ -+import errno -+import os -+import shutil -+import socket -+ -+def unlink(filename): -+ try: -+ os.unlink(filename) -+ except OSError: -+ pass -+ -+def rmtree(path): -+ try: -+ shutil.rmtree(path) -+ except OSError, e: -+ # Unix returns ENOENT, Windows returns ESRCH. 
-+ if e.errno not in (errno.ENOENT, errno.ESRCH): -+ raise -+ -+def find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM): -+ tempsock = socket.socket(family, socktype) -+ port = bind_port(tempsock, family=family, socktype=socktype) -+ tempsock.close() -+ del tempsock -+ return port -+ -+HOST = 'localhost' -+def bind_port(sock, family, socktype, host=HOST): -+ if family == socket.AF_INET and type == socket.SOCK_STREAM: -+ if hasattr(socket, 'SO_REUSEADDR'): -+ if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 1: -+ raise TestFailed("tests should never set the SO_REUSEADDR " \ -+ "socket option on TCP/IP sockets!") -+ if hasattr(socket, 'SO_REUSEPORT'): -+ if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) == 1: -+ raise TestFailed("tests should never set the SO_REUSEPORT " \ -+ "socket option on TCP/IP sockets!") -+ if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'): -+ sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) -+ -+ sock.bind((host, 0)) -+ port = sock.getsockname()[1] -+ return port -+ -diff -Nupr Python-2.6.4.orig/Modules/_bsddb.c Python-2.6.4/Modules/_bsddb.c ---- Python-2.6.4.orig/Modules/_bsddb.c 2008-09-23 14:54:08.000000000 -0400 -+++ Python-2.6.4/Modules/_bsddb.c 2009-12-04 07:34:56.000000000 -0500 -@@ -95,7 +95,7 @@ - #include "bsddb.h" - #undef COMPILING_BSDDB_C - --static char *rcs_id = "$Id: _bsddb.c 66568 2008-09-23 18:54:08Z jesus.cea $"; -+static char *rcs_id = "$Id: _bsddb.c 732 2009-11-05 19:39:04Z jcea $"; - - /* --------------------------------------------------------------------- */ - /* Various macro definitions */ -@@ -169,9 +169,6 @@ static PyInterpreterState* _db_interpret - - #endif - --/* Should DB_INCOMPLETE be turned into a warning or an exception? */ --#define INCOMPLETE_IS_WARNING 1 -- - /* --------------------------------------------------------------------- */ - /* Exceptions */ - -@@ -191,10 +188,6 @@ static PyObject* DBNoServerIDError; - static PyObject* DBPageNotFoundError; /* DB_PAGE_NOTFOUND */ - static PyObject* DBSecondaryBadError; /* DB_SECONDARY_BAD */ - --#if !INCOMPLETE_IS_WARNING --static PyObject* DBIncompleteError; /* DB_INCOMPLETE */ --#endif -- - static PyObject* DBInvalidArgError; /* EINVAL */ - static PyObject* DBAccessError; /* EACCES */ - static PyObject* DBNoSpaceError; /* ENOSPC */ -@@ -208,6 +201,13 @@ static PyObject* DBPermissionsError; - #if (DBVER >= 42) - static PyObject* DBRepHandleDeadError; /* DB_REP_HANDLE_DEAD */ - #endif -+#if (DBVER >= 44) -+static PyObject* DBRepLockoutError; /* DB_REP_LOCKOUT */ -+#endif -+ -+#if (DBVER >= 46) -+static PyObject* DBRepLeaseExpiredError; /* DB_REP_LEASE_EXPIRED */ -+#endif - - static PyObject* DBRepUnavailError; /* DB_REP_UNAVAIL */ - -@@ -215,6 +215,10 @@ static PyObject* DBRepUnavailError; - #define DB_BUFFER_SMALL ENOMEM - #endif - -+#if (DBVER < 48) -+#define DB_GID_SIZE DB_XIDDATASIZE -+#endif -+ - - /* --------------------------------------------------------------------- */ - /* Structure definitions */ -@@ -667,27 +671,8 @@ static int makeDBError(int err) - unsigned int bytes_left; - - switch (err) { -- case 0: /* successful, no error */ break; -- --#if (DBVER < 41) -- case DB_INCOMPLETE: --#if INCOMPLETE_IS_WARNING -- bytes_left = our_strlcpy(errTxt, db_strerror(err), sizeof(errTxt)); -- /* Ensure that bytes_left never goes negative */ -- if (_db_errmsg[0] && bytes_left < (sizeof(errTxt) - 4)) { -- bytes_left = sizeof(errTxt) - bytes_left - 4 - 1; -- assert(bytes_left >= 0); -- strcat(errTxt, " -- "); -- strncat(errTxt, _db_errmsg, 
bytes_left); -- } -- _db_errmsg[0] = 0; -- exceptionRaised = PyErr_Warn(PyExc_RuntimeWarning, errTxt); -- --#else /* do an exception instead */ -- errObj = DBIncompleteError; --#endif -- break; --#endif /* DBVER < 41 */ -+ case 0: /* successful, no error */ -+ return 0; - - case DB_KEYEMPTY: errObj = DBKeyEmptyError; break; - case DB_KEYEXIST: errObj = DBKeyExistError; break; -@@ -720,6 +705,13 @@ static int makeDBError(int err) - #if (DBVER >= 42) - case DB_REP_HANDLE_DEAD : errObj = DBRepHandleDeadError; break; - #endif -+#if (DBVER >= 44) -+ case DB_REP_LOCKOUT : errObj = DBRepLockoutError; break; -+#endif -+ -+#if (DBVER >= 46) -+ case DB_REP_LEASE_EXPIRED : errObj = DBRepLeaseExpiredError; break; -+#endif - - case DB_REP_UNAVAIL : errObj = DBRepUnavailError; break; - -@@ -1417,10 +1409,70 @@ _db_associateCallback(DB* db, const DBT* - PyErr_Print(); - } - } -+#if (DBVER >= 46) -+ else if (PyList_Check(result)) -+ { -+ char* data; -+ Py_ssize_t size; -+ int i, listlen; -+ DBT* dbts; -+ -+ listlen = PyList_Size(result); -+ -+ dbts = (DBT *)malloc(sizeof(DBT) * listlen); -+ -+ for (i=0; i<listlen; i++) -+ { -+ if (!PyBytes_Check(PyList_GetItem(result, i))) -+ { -+ PyErr_SetString( -+ PyExc_TypeError, -+#if (PY_VERSION_HEX < 0x03000000) -+"The list returned by DB->associate callback should be a list of strings."); -+#else -+"The list returned by DB->associate callback should be a list of bytes."); -+#endif -+ PyErr_Print(); -+ } -+ -+ PyBytes_AsStringAndSize( -+ PyList_GetItem(result, i), -+ &data, &size); -+ -+ CLEAR_DBT(dbts[i]); -+ dbts[i].data = malloc(size); /* TODO, check this */ -+ -+ if (dbts[i].data) -+ { -+ memcpy(dbts[i].data, data, size); -+ dbts[i].size = size; -+ dbts[i].ulen = dbts[i].size; -+ dbts[i].flags = DB_DBT_APPMALLOC; /* DB will free */ -+ } -+ else -+ { -+ PyErr_SetString(PyExc_MemoryError, -+ "malloc failed in _db_associateCallback (list)"); -+ PyErr_Print(); -+ } -+ } -+ -+ CLEAR_DBT(*secKey); -+ -+ secKey->data = dbts; -+ secKey->size = listlen; -+ secKey->flags = DB_DBT_APPMALLOC | DB_DBT_MULTIPLE; -+ retval = 0; -+ } -+#endif - else { - PyErr_SetString( - PyExc_TypeError, -- "DB associate callback should return DB_DONOTINDEX or string."); -+#if (PY_VERSION_HEX < 0x03000000) -+"DB associate callback should return DB_DONOTINDEX/string/list of strings."); -+#else -+"DB associate callback should return DB_DONOTINDEX/bytes/list of bytes."); -+#endif - PyErr_Print(); - } - -@@ -1439,29 +1491,18 @@ DB_associate(DBObject* self, PyObject* a - int err, flags=0; - DBObject* secondaryDB; - PyObject* callback; --#if (DBVER >= 41) - PyObject *txnobj = NULL; - DB_TXN *txn = NULL; - static char* kwnames[] = {"secondaryDB", "callback", "flags", "txn", - NULL}; --#else -- static char* kwnames[] = {"secondaryDB", "callback", "flags", NULL}; --#endif - --#if (DBVER >= 41) - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iO:associate", kwnames, - &secondaryDB, &callback, &flags, - &txnobj)) { --#else -- if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|i:associate", kwnames, -- &secondaryDB, &callback, &flags)) { --#endif - return NULL; - } - --#if (DBVER >= 41) - if (!checkTxnObj(txnobj, &txn)) return NULL; --#endif - - CHECK_DB_NOT_CLOSED(self); - if (!DBObject_Check(secondaryDB)) { -@@ -1497,18 +1538,11 @@ DB_associate(DBObject* self, PyObject* a - PyEval_InitThreads(); - #endif - MYDB_BEGIN_ALLOW_THREADS; --#if (DBVER >= 41) - err = self->db->associate(self->db, - txn, - secondaryDB->db, - _db_associateCallback, - flags); --#else -- err = self->db->associate(self->db, -- 
secondaryDB->db, -- _db_associateCallback, -- flags); --#endif - MYDB_END_ALLOW_THREADS; - - if (err) { -@@ -1701,6 +1735,64 @@ DB_delete(DBObject* self, PyObject* args - } - - -+#if (DBVER >= 47) -+/* -+** This function is available since Berkeley DB 4.4, -+** but 4.6 version is so buggy that we only support -+** it from BDB 4.7 and newer. -+*/ -+static PyObject* -+DB_compact(DBObject* self, PyObject* args, PyObject* kwargs) -+{ -+ PyObject* txnobj = NULL; -+ PyObject *startobj = NULL, *stopobj = NULL; -+ int flags = 0; -+ DB_TXN *txn = NULL; -+ DBT *start_p = NULL, *stop_p = NULL; -+ DBT start, stop; -+ int err; -+ DB_COMPACT c_data = { 0 }; -+ static char* kwnames[] = { "txn", "start", "stop", "flags", -+ "compact_fillpercent", "compact_pages", -+ "compact_timeout", NULL }; -+ -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OOOiiiI:compact", kwnames, -+ &txnobj, &startobj, &stopobj, &flags, -+ &c_data.compact_fillpercent, -+ &c_data.compact_pages, -+ &c_data.compact_timeout)) -+ return NULL; -+ -+ CHECK_DB_NOT_CLOSED(self); -+ if (!checkTxnObj(txnobj, &txn)) { -+ return NULL; -+ } -+ -+ if (startobj && make_key_dbt(self, startobj, &start, NULL)) { -+ start_p = &start; -+ } -+ if (stopobj && make_key_dbt(self, stopobj, &stop, NULL)) { -+ stop_p = &stop; -+ } -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db->compact(self->db, txn, start_p, stop_p, &c_data, -+ flags, NULL); -+ MYDB_END_ALLOW_THREADS; -+ -+ if (startobj) -+ FREE_DBT(start); -+ if (stopobj) -+ FREE_DBT(stop); -+ -+ RETURN_IF_ERR(); -+ -+ return PyLong_FromUnsignedLong(c_data.compact_pages_truncated); -+} -+#endif -+ -+ - static PyObject* - DB_fd(DBObject* self) - { -@@ -1716,6 +1808,55 @@ DB_fd(DBObject* self) - } - - -+#if (DBVER >= 46) -+static PyObject* -+DB_exists(DBObject* self, PyObject* args, PyObject* kwargs) -+{ -+ int err, flags=0; -+ PyObject* txnobj = NULL; -+ PyObject* keyobj; -+ DBT key; -+ DB_TXN *txn; -+ -+ static char* kwnames[] = {"key", "txn", "flags", NULL}; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:exists", kwnames, -+ &keyobj, &txnobj, &flags)) -+ return NULL; -+ -+ CHECK_DB_NOT_CLOSED(self); -+ if (!make_key_dbt(self, keyobj, &key, NULL)) -+ return NULL; -+ if (!checkTxnObj(txnobj, &txn)) { -+ FREE_DBT(key); -+ return NULL; -+ } -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db->exists(self->db, txn, &key, flags); -+ MYDB_END_ALLOW_THREADS; -+ -+ FREE_DBT(key); -+ -+ if (!err) { -+ Py_INCREF(Py_True); -+ return Py_True; -+ } -+ if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)) { -+ Py_INCREF(Py_False); -+ return Py_False; -+ } -+ -+ /* -+ ** If we reach there, there was an error. The -+ ** "return" should be unreachable. 
-+ */ -+ RETURN_IF_ERR(); -+ assert(0); /* This coude SHOULD be unreachable */ -+ return NULL; -+} -+#endif -+ - static PyObject* - DB_get(DBObject* self, PyObject* args, PyObject* kwargs) - { -@@ -2114,7 +2255,6 @@ DB_open(DBObject* self, PyObject* args, - int err, type = DB_UNKNOWN, flags=0, mode=0660; - char* filename = NULL; - char* dbname = NULL; --#if (DBVER >= 41) - PyObject *txnobj = NULL; - DB_TXN *txn = NULL; - /* with dbname */ -@@ -2123,45 +2263,22 @@ DB_open(DBObject* self, PyObject* args, - /* without dbname */ - static char* kwnames_basic[] = { - "filename", "dbtype", "flags", "mode", "txn", NULL}; --#else -- /* with dbname */ -- static char* kwnames[] = { -- "filename", "dbname", "dbtype", "flags", "mode", NULL}; -- /* without dbname */ -- static char* kwnames_basic[] = { -- "filename", "dbtype", "flags", "mode", NULL}; --#endif - --#if (DBVER >= 41) - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|ziiiO:open", kwnames, - &filename, &dbname, &type, &flags, &mode, - &txnobj)) --#else -- if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|ziii:open", kwnames, -- &filename, &dbname, &type, &flags, -- &mode)) --#endif - { - PyErr_Clear(); - type = DB_UNKNOWN; flags = 0; mode = 0660; - filename = NULL; dbname = NULL; --#if (DBVER >= 41) - if (!PyArg_ParseTupleAndKeywords(args, kwargs,"z|iiiO:open", - kwnames_basic, - &filename, &type, &flags, &mode, - &txnobj)) - return NULL; --#else -- if (!PyArg_ParseTupleAndKeywords(args, kwargs,"z|iii:open", -- kwnames_basic, -- &filename, &type, &flags, &mode)) -- return NULL; --#endif - } - --#if (DBVER >= 41) - if (!checkTxnObj(txnobj, &txn)) return NULL; --#endif - - if (NULL == self->db) { - PyObject *t = Py_BuildValue("(is)", 0, -@@ -2173,24 +2290,17 @@ DB_open(DBObject* self, PyObject* args, - return NULL; - } - --#if (DBVER >= 41) - if (txn) { /* Can't use 'txnobj' because could be 'txnobj==Py_None' */ - INSERT_IN_DOUBLE_LINKED_LIST_TXN(((DBTxnObject *)txnobj)->children_dbs,self); - self->txn=(DBTxnObject *)txnobj; - } else { - self->txn=NULL; - } --#else -- self->txn=NULL; --#endif - - MYDB_BEGIN_ALLOW_THREADS; --#if (DBVER >= 41) - err = self->db->open(self->db, txn, filename, dbname, type, flags, mode); --#else -- err = self->db->open(self->db, filename, dbname, type, flags, mode); --#endif - MYDB_END_ALLOW_THREADS; -+ - if (makeDBError(err)) { - PyObject *dummy; - -@@ -2490,6 +2600,25 @@ DB_set_cachesize(DBObject* self, PyObjec - RETURN_NONE(); - } - -+#if (DBVER >= 42) -+static PyObject* -+DB_get_cachesize(DBObject* self) -+{ -+ int err; -+ u_int32_t gbytes, bytes; -+ int ncache; -+ -+ CHECK_DB_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db->get_cachesize(self->db, &gbytes, &bytes, &ncache); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return Py_BuildValue("(iii)", gbytes, bytes, ncache); -+} -+#endif - - static PyObject* - DB_set_flags(DBObject* self, PyObject* args) -@@ -2730,9 +2859,6 @@ DB_stat(DBObject* self, PyObject* args, - MAKE_HASH_ENTRY(pagecnt); - #endif - MAKE_HASH_ENTRY(pagesize); --#if (DBVER < 41) -- MAKE_HASH_ENTRY(nelem); --#endif - MAKE_HASH_ENTRY(ffactor); - MAKE_HASH_ENTRY(buckets); - MAKE_HASH_ENTRY(free); -@@ -2779,9 +2905,7 @@ DB_stat(DBObject* self, PyObject* args, - MAKE_QUEUE_ENTRY(nkeys); - MAKE_QUEUE_ENTRY(ndata); - MAKE_QUEUE_ENTRY(pagesize); --#if (DBVER >= 41) - MAKE_QUEUE_ENTRY(extentsize); --#endif - MAKE_QUEUE_ENTRY(pages); - MAKE_QUEUE_ENTRY(re_len); - MAKE_QUEUE_ENTRY(re_pad); -@@ -2892,7 +3016,7 @@ DB_verify(DBObject* self, PyObject* args - PyObject *error; - 
- error=DB_close_internal(self, 0, 1); -- if (error ) { -+ if (error) { - return error; - } - } -@@ -2930,7 +3054,6 @@ DB_set_get_returns_none(DBObject* self, - return NUMBER_FromLong(oldValue); - } - --#if (DBVER >= 41) - static PyObject* - DB_set_encrypt(DBObject* self, PyObject* args, PyObject* kwargs) - { -@@ -2951,7 +3074,24 @@ DB_set_encrypt(DBObject* self, PyObject* - RETURN_IF_ERR(); - RETURN_NONE(); - } --#endif /* DBVER >= 41 */ -+ -+#if (DBVER >= 42) -+static PyObject* -+DB_get_encrypt_flags(DBObject* self) -+{ -+ int err; -+ u_int32_t flags; -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db->get_encrypt_flags(self->db, &flags); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return NUMBER_FromLong(flags); -+} -+#endif -+ - - - /*-------------------------------------------------------------- */ -@@ -3097,18 +3237,11 @@ DB_ass_sub(DBObject* self, PyObject* key - - - static PyObject* --DB_has_key(DBObject* self, PyObject* args, PyObject* kwargs) -+_DB_has_key(DBObject* self, PyObject* keyobj, PyObject* txnobj) - { - int err; -- PyObject* keyobj; -- DBT key, data; -- PyObject* txnobj = NULL; -+ DBT key; - DB_TXN *txn = NULL; -- static char* kwnames[] = {"key","txn", NULL}; -- -- if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:has_key", kwnames, -- &keyobj, &txnobj)) -- return NULL; - - CHECK_DB_NOT_CLOSED(self); - if (!make_key_dbt(self, keyobj, &key, NULL)) -@@ -3118,28 +3251,77 @@ DB_has_key(DBObject* self, PyObject* arg - return NULL; - } - -+#if (DBVER < 46) - /* This causes DB_BUFFER_SMALL to be returned when the db has the key because - it has a record but can't allocate a buffer for the data. This saves - having to deal with data we won't be using. - */ -- CLEAR_DBT(data); -- data.flags = DB_DBT_USERMEM; -+ { -+ DBT data ; -+ CLEAR_DBT(data); -+ data.flags = DB_DBT_USERMEM; - -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db->get(self->db, txn, &key, &data, 0); -+ MYDB_END_ALLOW_THREADS; -+ } -+#else - MYDB_BEGIN_ALLOW_THREADS; -- err = self->db->get(self->db, txn, &key, &data, 0); -+ err = self->db->exists(self->db, txn, &key, 0); - MYDB_END_ALLOW_THREADS; -+#endif -+ - FREE_DBT(key); - -+ /* -+ ** DB_BUFFER_SMALL is only used if we use "get". -+ ** We can drop it when we only use "exists", -+ ** when we drop suport for Berkeley DB < 4.6. 
-+ */ - if (err == DB_BUFFER_SMALL || err == 0) { -- return NUMBER_FromLong(1); -+ Py_INCREF(Py_True); -+ return Py_True; - } else if (err == DB_NOTFOUND || err == DB_KEYEMPTY) { -- return NUMBER_FromLong(0); -+ Py_INCREF(Py_False); -+ return Py_False; - } - - makeDBError(err); - return NULL; - } - -+static PyObject* -+DB_has_key(DBObject* self, PyObject* args, PyObject* kwargs) -+{ -+ PyObject* keyobj; -+ PyObject* txnobj = NULL; -+ static char* kwnames[] = {"key","txn", NULL}; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:has_key", kwnames, -+ &keyobj, &txnobj)) -+ return NULL; -+ -+ return _DB_has_key(self, keyobj, txnobj); -+} -+ -+ -+static int DB_contains(DBObject* self, PyObject* keyobj) -+{ -+ PyObject* result; -+ int result2 = 0; -+ -+ result = _DB_has_key(self, keyobj, NULL) ; -+ if (result == NULL) { -+ return -1; /* Propague exception */ -+ } -+ if (result != Py_False) { -+ result2 = 1; -+ } -+ -+ Py_DECREF(result); -+ return result2; -+} -+ - - #define _KEYS_LIST 1 - #define _VALUES_LIST 2 -@@ -3970,6 +4152,13 @@ DBC_next_nodup(DBCursorObject* self, PyO - return _DBCursor_get(self,DB_NEXT_NODUP,args,kwargs,"|iii:next_nodup"); - } - -+#if (DBVER >= 46) -+static PyObject* -+DBC_prev_dup(DBCursorObject* self, PyObject* args, PyObject *kwargs) -+{ -+ return _DBCursor_get(self,DB_PREV_DUP,args,kwargs,"|iii:prev_dup"); -+} -+#endif - - static PyObject* - DBC_prev_nodup(DBCursorObject* self, PyObject* args, PyObject *kwargs) -@@ -4012,6 +4201,44 @@ DBC_join_item(DBCursorObject* self, PyOb - } - - -+#if (DBVER >= 46) -+static PyObject* -+DBC_set_priority(DBCursorObject* self, PyObject* args, PyObject* kwargs) -+{ -+ int err, priority; -+ static char* kwnames[] = { "priority", NULL }; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i:set_priority", kwnames, -+ &priority)) -+ return NULL; -+ -+ CHECK_CURSOR_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->dbc->set_priority(self->dbc, priority); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+ -+static PyObject* -+DBC_get_priority(DBCursorObject* self) -+{ -+ int err; -+ DB_CACHE_PRIORITY priority; -+ -+ CHECK_CURSOR_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->dbc->get_priority(self->dbc, &priority); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ return NUMBER_FromLong(priority); -+} -+#endif -+ -+ - - /* --------------------------------------------------------------------- */ - /* DBEnv methods */ -@@ -4095,7 +4322,6 @@ DBEnv_remove(DBEnvObject* self, PyObject - RETURN_NONE(); - } - --#if (DBVER >= 41) - static PyObject* - DBEnv_dbremove(DBEnvObject* self, PyObject* args, PyObject* kwargs) - { -@@ -4152,6 +4378,8 @@ DBEnv_dbrename(DBEnvObject* self, PyObje - RETURN_NONE(); - } - -+ -+ - static PyObject* - DBEnv_set_encrypt(DBEnvObject* self, PyObject* args, PyObject* kwargs) - { -@@ -4172,17 +4400,57 @@ DBEnv_set_encrypt(DBEnvObject* self, PyO - RETURN_IF_ERR(); - RETURN_NONE(); - } --#endif /* DBVER >= 41 */ - -+#if (DBVER >= 42) - static PyObject* --DBEnv_set_timeout(DBEnvObject* self, PyObject* args, PyObject* kwargs) -+DBEnv_get_encrypt_flags(DBEnvObject* self) - { - int err; -- u_int32_t flags=0; -- u_int32_t timeout = 0; -- static char* kwnames[] = { "timeout", "flags", NULL }; -+ u_int32_t flags; - -- if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:set_timeout", kwnames, -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_encrypt_flags(self->db_env, &flags); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); 
-+ -+ return NUMBER_FromLong(flags); -+} -+ -+static PyObject* -+DBEnv_get_timeout(DBEnvObject* self, PyObject* args, PyObject* kwargs) -+{ -+ int err; -+ int flag; -+ u_int32_t timeout; -+ static char* kwnames[] = {"flag", NULL }; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i:get_timeout", kwnames, -+ &flag)) { -+ return NULL; -+ } -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_timeout(self->db_env, &timeout, flag); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ return NUMBER_FromLong(timeout); -+} -+#endif -+ -+ -+static PyObject* -+DBEnv_set_timeout(DBEnvObject* self, PyObject* args, PyObject* kwargs) -+{ -+ int err; -+ u_int32_t flags=0; -+ u_int32_t timeout = 0; -+ static char* kwnames[] = { "timeout", "flags", NULL }; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:set_timeout", kwnames, - &timeout, &flags)) { - return NULL; - } -@@ -4210,6 +4478,25 @@ DBEnv_set_shm_key(DBEnvObject* self, PyO - RETURN_NONE(); - } - -+#if (DBVER >= 42) -+static PyObject* -+DBEnv_get_shm_key(DBEnvObject* self) -+{ -+ int err; -+ long shm_key; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_shm_key(self->db_env, &shm_key); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return NUMBER_FromLong(shm_key); -+} -+#endif -+ - static PyObject* - DBEnv_set_cachesize(DBEnvObject* self, PyObject* args) - { -@@ -4227,6 +4514,26 @@ DBEnv_set_cachesize(DBEnvObject* self, P - RETURN_NONE(); - } - -+#if (DBVER >= 42) -+static PyObject* -+DBEnv_get_cachesize(DBEnvObject* self) -+{ -+ int err; -+ u_int32_t gbytes, bytes; -+ int ncache; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_cachesize(self->db_env, &gbytes, &bytes, &ncache); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return Py_BuildValue("(iii)", gbytes, bytes, ncache); -+} -+#endif -+ - - static PyObject* - DBEnv_set_flags(DBEnvObject* self, PyObject* args) -@@ -4265,6 +4572,151 @@ DBEnv_log_set_config(DBEnvObject* self, - } - #endif /* DBVER >= 47 */ - -+#if (DBVER >= 44) -+static PyObject* -+DBEnv_mutex_set_max(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int value; -+ -+ if (!PyArg_ParseTuple(args, "i:mutex_set_max", &value)) -+ return NULL; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_set_max(self->db_env, value); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* -+DBEnv_mutex_get_max(DBEnvObject* self) -+{ -+ int err; -+ u_int32_t value; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_get_max(self->db_env, &value); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return NUMBER_FromLong(value); -+} -+ -+static PyObject* -+DBEnv_mutex_set_align(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int align; -+ -+ if (!PyArg_ParseTuple(args, "i:mutex_set_align", &align)) -+ return NULL; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_set_align(self->db_env, align); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* -+DBEnv_mutex_get_align(DBEnvObject* self) -+{ -+ int err; -+ u_int32_t align; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_get_align(self->db_env, &align); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return NUMBER_FromLong(align); -+} -+ -+static PyObject* 
-+DBEnv_mutex_set_increment(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int increment; -+ -+ if (!PyArg_ParseTuple(args, "i:mutex_set_increment", &increment)) -+ return NULL; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_set_increment(self->db_env, increment); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* -+DBEnv_mutex_get_increment(DBEnvObject* self) -+{ -+ int err; -+ u_int32_t increment; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_get_increment(self->db_env, &increment); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return NUMBER_FromLong(increment); -+} -+ -+static PyObject* -+DBEnv_mutex_set_tas_spins(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int tas_spins; -+ -+ if (!PyArg_ParseTuple(args, "i:mutex_set_tas_spins", &tas_spins)) -+ return NULL; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_set_tas_spins(self->db_env, tas_spins); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* -+DBEnv_mutex_get_tas_spins(DBEnvObject* self) -+{ -+ int err; -+ u_int32_t tas_spins; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->mutex_get_tas_spins(self->db_env, &tas_spins); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ return NUMBER_FromLong(tas_spins); -+} -+#endif - - static PyObject* - DBEnv_set_data_dir(DBEnvObject* self, PyObject* args) -@@ -4283,6 +4735,47 @@ DBEnv_set_data_dir(DBEnvObject* self, Py - RETURN_NONE(); - } - -+#if (DBVER >= 42) -+static PyObject* -+DBEnv_get_data_dirs(DBEnvObject* self) -+{ -+ int err; -+ PyObject *tuple; -+ PyObject *item; -+ const char **dirpp; -+ int size, i; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_data_dirs(self->db_env, &dirpp); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ -+ /* -+ ** Calculate size. Python C API -+ ** actually allows for tuple resizing, -+ ** but this is simple enough. 
-+ */ -+ for (size=0; *(dirpp+size) ; size++); -+ -+ tuple = PyTuple_New(size); -+ if (!tuple) -+ return NULL; -+ -+ for (i=0; i<size; i++) { -+ item = PyBytes_FromString (*(dirpp+i)); -+ if (item == NULL) { -+ Py_DECREF(tuple); -+ tuple = NULL; -+ break; -+ } -+ PyTuple_SET_ITEM(tuple, i, item); -+ } -+ return tuple; -+} -+#endif - static PyObject* - DBEnv_set_lg_bsize(DBEnvObject* self, PyObject* args) -@@ -4501,7 +4994,11 @@ DBEnv_txn_recover(DBEnvObject* self) - DBTxnObject *txn; - #define PREPLIST_LEN 16 - DB_PREPLIST preplist[PREPLIST_LEN]; -+#if (DBVER < 48) - long retp; -+#else -+ u_int32_t retp; -+#endif - - CHECK_ENV_NOT_CLOSED(self); - -@@ -4522,12 +5019,12 @@ DBEnv_txn_recover(DBEnvObject* self) - flags=DB_NEXT; /* Prepare for next loop pass */ - for (i=0; i<retp; i++) { - gid=PyBytes_FromStringAndSize((char *)(preplist[i].gid), -- DB_XIDDATASIZE); -+ DB_GID_SIZE); - if (!gid) { - Py_DECREF(list); - return NULL; - } -- txn=newDBTxnObject(self, NULL, preplist[i].txn, flags); -+ txn=newDBTxnObject(self, NULL, preplist[i].txn, 0); - if (!txn) { - Py_DECREF(list); - Py_DECREF(gid); -@@ -4602,6 +5099,24 @@ DBEnv_txn_checkpoint(DBEnvObject* self, - } - - -+#if (DBVER >= 42) -+static PyObject* -+DBEnv_get_tx_max(DBEnvObject* self) -+{ -+ int err; -+ u_int32_t max; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_tx_max(self->db_env, &max); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ return PyLong_FromUnsignedLong(max); -+} -+#endif -+ -+ - static PyObject* - DBEnv_set_tx_max(DBEnvObject* self, PyObject* args) - { -@@ -4611,12 +5126,31 @@ DBEnv_set_tx_max(DBEnvObject* self, PyOb - return NULL; - CHECK_ENV_NOT_CLOSED(self); - -+ MYDB_BEGIN_ALLOW_THREADS; - err = self->db_env->set_tx_max(self->db_env, max); -+ MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); - RETURN_NONE(); - } - - -+#if (DBVER >= 42) -+static PyObject* -+DBEnv_get_tx_timestamp(DBEnvObject* self) -+{ -+ int err; -+ time_t timestamp; -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->get_tx_timestamp(self->db_env, &timestamp); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ return NUMBER_FromLong(timestamp); -+} -+#endif -+ - static PyObject* - DBEnv_set_tx_timestamp(DBEnvObject* self, PyObject* args) - { -@@ -4628,7 +5162,9 @@ DBEnv_set_tx_timestamp(DBEnvObject* self - return NULL; - CHECK_ENV_NOT_CLOSED(self); - timestamp = (time_t)stamp; -+ MYDB_BEGIN_ALLOW_THREADS; - err = self->db_env->set_tx_timestamp(self->db_env, &timestamp); -+ MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); - RETURN_NONE(); - } -@@ -4722,6 +5258,26 @@ DBEnv_lock_put(DBEnvObject* self, PyObje - - #if (DBVER >= 44) - static PyObject* -+DBEnv_fileid_reset(DBEnvObject* self, PyObject* args, PyObject* kwargs) -+{ -+ int err; -+ char *file; -+ u_int32_t flags = 0; -+ static char* kwnames[] = { "file", "flags", NULL}; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|i:fileid_reset", kwnames, -+ &file, &flags)) -+ return NULL; -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->fileid_reset(self->db_env, file, flags); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* - DBEnv_lsn_reset(DBEnvObject* self, PyObject* args, PyObject* kwargs) - { - int err; -@@ -4777,9 +5333,6 @@ DBEnv_log_stat(DBEnvObject* self, PyObje - MAKE_ENTRY(lg_size); - MAKE_ENTRY(record); - #endif --#if (DBVER < 41) -- MAKE_ENTRY(lg_max); --#endif - MAKE_ENTRY(w_mbytes); - MAKE_ENTRY(w_bytes); - MAKE_ENTRY(wc_mbytes); -@@ -4832,13 
+5385,8 @@ DBEnv_lock_stat(DBEnvObject* self, PyObj - - #define MAKE_ENTRY(name) _addIntToDict(d, #name, sp->st_##name) - --#if (DBVER < 41) -- MAKE_ENTRY(lastid); --#endif --#if (DBVER >=41) - MAKE_ENTRY(id); - MAKE_ENTRY(cur_maxid); --#endif - MAKE_ENTRY(nmodes); - MAKE_ENTRY(maxlocks); - MAKE_ENTRY(maxlockers); -@@ -4863,10 +5411,8 @@ DBEnv_lock_stat(DBEnvObject* self, PyObj - MAKE_ENTRY(lock_wait); - #endif - MAKE_ENTRY(ndeadlocks); --#if (DBVER >= 41) - MAKE_ENTRY(locktimeout); - MAKE_ENTRY(txntimeout); --#endif - MAKE_ENTRY(nlocktimeouts); - MAKE_ENTRY(ntxntimeouts); - #if (DBVER >= 46) -@@ -4955,6 +5501,31 @@ DBEnv_log_archive(DBEnvObject* self, PyO - } - - -+#if (DBVER >= 43) -+static PyObject* -+DBEnv_txn_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) -+{ -+ int err; -+ int flags=0; -+ static char* kwnames[] = { "flags", NULL }; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat_print", -+ kwnames, &flags)) -+ { -+ return NULL; -+ } -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->txn_stat_print(self->db_env, flags); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+#endif -+ -+ - static PyObject* - DBEnv_txn_stat(DBEnvObject* self, PyObject* args) - { -@@ -5047,6 +5618,7 @@ DBEnv_set_private(DBEnvObject* self, PyO - } - - -+#if (DBVER < 48) - static PyObject* - DBEnv_set_rpc_server(DBEnvObject* self, PyObject* args, PyObject* kwargs) - { -@@ -5068,6 +5640,7 @@ DBEnv_set_rpc_server(DBEnvObject* self, - RETURN_IF_ERR(); - RETURN_NONE(); - } -+#endif - - static PyObject* - DBEnv_set_verbose(DBEnvObject* self, PyObject* args) -@@ -5551,79 +6124,248 @@ DBEnv_rep_get_nsites(DBEnvObject* self) - err = self->db_env->rep_get_nsites(self->db_env, &nsites); - MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); -- return NUMBER_FromLong(nsites); -+ return NUMBER_FromLong(nsites); -+} -+ -+static PyObject* -+DBEnv_rep_set_priority(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int priority; -+ -+ if (!PyArg_ParseTuple(args, "i:rep_set_priority", &priority)) { -+ return NULL; -+ } -+ CHECK_ENV_NOT_CLOSED(self); -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->rep_set_priority(self->db_env, priority); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* -+DBEnv_rep_get_priority(DBEnvObject* self) -+{ -+ int err; -+#if (DBVER >= 47) -+ u_int32_t priority; -+#else -+ int priority; -+#endif -+ -+ CHECK_ENV_NOT_CLOSED(self); -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->rep_get_priority(self->db_env, &priority); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ return NUMBER_FromLong(priority); -+} -+ -+static PyObject* -+DBEnv_rep_set_timeout(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int which, timeout; -+ -+ if (!PyArg_ParseTuple(args, "ii:rep_set_timeout", &which, &timeout)) { -+ return NULL; -+ } -+ CHECK_ENV_NOT_CLOSED(self); -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->rep_set_timeout(self->db_env, which, timeout); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+static PyObject* -+DBEnv_rep_get_timeout(DBEnvObject* self, PyObject* args) -+{ -+ int err; -+ int which; -+ u_int32_t timeout; -+ -+ if (!PyArg_ParseTuple(args, "i:rep_get_timeout", &which)) { -+ return NULL; -+ } -+ CHECK_ENV_NOT_CLOSED(self); -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->db_env->rep_get_timeout(self->db_env, which, &timeout); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ return NUMBER_FromLong(timeout); - } -+#endif -+ - -+#if (DBVER >= 
47) - static PyObject* --DBEnv_rep_set_priority(DBEnvObject* self, PyObject* args) -+DBEnv_rep_set_clockskew(DBEnvObject* self, PyObject* args) - { - int err; -- int priority; -+ unsigned int fast, slow; - -- if (!PyArg_ParseTuple(args, "i:rep_set_priority", &priority)) { -+#if (PY_VERSION_HEX >= 0x02040000) -+ if (!PyArg_ParseTuple(args,"II:rep_set_clockskew", &fast, &slow)) - return NULL; -- } -+#else -+ if (!PyArg_ParseTuple(args,"ii:rep_set_clockskew", &fast, &slow)) -+ return NULL; -+#endif -+ - CHECK_ENV_NOT_CLOSED(self); -+ - MYDB_BEGIN_ALLOW_THREADS; -- err = self->db_env->rep_set_priority(self->db_env, priority); -+ err = self->db_env->rep_set_clockskew(self->db_env, fast, slow); - MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); - RETURN_NONE(); - } - - static PyObject* --DBEnv_rep_get_priority(DBEnvObject* self) -+DBEnv_rep_get_clockskew(DBEnvObject* self) - { - int err; --#if (DBVER >= 47) -- u_int32_t priority; --#else -- int priority; --#endif -+ unsigned int fast, slow; - - CHECK_ENV_NOT_CLOSED(self); - MYDB_BEGIN_ALLOW_THREADS; -- err = self->db_env->rep_get_priority(self->db_env, &priority); -+ err = self->db_env->rep_get_clockskew(self->db_env, &fast, &slow); - MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); -- return NUMBER_FromLong(priority); -+#if (PY_VERSION_HEX >= 0x02040000) -+ return Py_BuildValue("(II)", fast, slow); -+#else -+ return Py_BuildValue("(ii)", fast, slow); -+#endif - } -+#endif - -+#if (DBVER >= 43) - static PyObject* --DBEnv_rep_set_timeout(DBEnvObject* self, PyObject* args) -+DBEnv_rep_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) - { - int err; -- int which, timeout; -+ int flags=0; -+ static char* kwnames[] = { "flags", NULL }; - -- if (!PyArg_ParseTuple(args, "ii:rep_set_timeout", &which, &timeout)) { -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:rep_stat_print", -+ kwnames, &flags)) -+ { - return NULL; - } - CHECK_ENV_NOT_CLOSED(self); - MYDB_BEGIN_ALLOW_THREADS; -- err = self->db_env->rep_set_timeout(self->db_env, which, timeout); -+ err = self->db_env->rep_stat_print(self->db_env, flags); - MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); - RETURN_NONE(); - } -+#endif - - static PyObject* --DBEnv_rep_get_timeout(DBEnvObject* self, PyObject* args) -+DBEnv_rep_stat(DBEnvObject* self, PyObject* args, PyObject *kwargs) - { - int err; -- int which; -- u_int32_t timeout; -+ int flags=0; -+ DB_REP_STAT *statp; -+ PyObject *stats; -+ static char* kwnames[] = { "flags", NULL }; - -- if (!PyArg_ParseTuple(args, "i:rep_get_timeout", &which)) { -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:rep_stat", -+ kwnames, &flags)) -+ { - return NULL; - } - CHECK_ENV_NOT_CLOSED(self); - MYDB_BEGIN_ALLOW_THREADS; -- err = self->db_env->rep_get_timeout(self->db_env, which, &timeout); -+ err = self->db_env->rep_stat(self->db_env, &statp, flags); - MYDB_END_ALLOW_THREADS; - RETURN_IF_ERR(); -- return NUMBER_FromLong(timeout); --} -+ -+ stats=PyDict_New(); -+ if (stats == NULL) { -+ free(statp); -+ return NULL; -+ } -+ -+#define MAKE_ENTRY(name) _addIntToDict(stats, #name, statp->st_##name) -+#define MAKE_DB_LSN_ENTRY(name) _addDB_lsnToDict(stats , #name, statp->st_##name) -+ -+#if (DBVER >= 44) -+ MAKE_ENTRY(bulk_fills); -+ MAKE_ENTRY(bulk_overflows); -+ MAKE_ENTRY(bulk_records); -+ MAKE_ENTRY(bulk_transfers); -+ MAKE_ENTRY(client_rerequests); -+ MAKE_ENTRY(client_svc_miss); -+ MAKE_ENTRY(client_svc_req); -+#endif -+ MAKE_ENTRY(dupmasters); -+#if (DBVER >= 43) -+ MAKE_ENTRY(egen); -+ MAKE_ENTRY(election_nvotes); -+ MAKE_ENTRY(startup_complete); -+ 
MAKE_ENTRY(pg_duplicated); -+ MAKE_ENTRY(pg_records); -+ MAKE_ENTRY(pg_requested); -+ MAKE_ENTRY(next_pg); -+ MAKE_ENTRY(waiting_pg); -+#endif -+ MAKE_ENTRY(election_cur_winner); -+ MAKE_ENTRY(election_gen); -+ MAKE_DB_LSN_ENTRY(election_lsn); -+ MAKE_ENTRY(election_nsites); -+ MAKE_ENTRY(election_priority); -+#if (DBVER >= 44) -+ MAKE_ENTRY(election_sec); -+ MAKE_ENTRY(election_usec); -+#endif -+ MAKE_ENTRY(election_status); -+ MAKE_ENTRY(election_tiebreaker); -+ MAKE_ENTRY(election_votes); -+ MAKE_ENTRY(elections); -+ MAKE_ENTRY(elections_won); -+ MAKE_ENTRY(env_id); -+ MAKE_ENTRY(env_priority); -+ MAKE_ENTRY(gen); -+ MAKE_ENTRY(log_duplicated); -+ MAKE_ENTRY(log_queued); -+ MAKE_ENTRY(log_queued_max); -+ MAKE_ENTRY(log_queued_total); -+ MAKE_ENTRY(log_records); -+ MAKE_ENTRY(log_requested); -+ MAKE_ENTRY(master); -+ MAKE_ENTRY(master_changes); -+#if (DBVER >= 47) -+ MAKE_ENTRY(max_lease_sec); -+ MAKE_ENTRY(max_lease_usec); -+ MAKE_DB_LSN_ENTRY(max_perm_lsn); -+#endif -+ MAKE_ENTRY(msgs_badgen); -+ MAKE_ENTRY(msgs_processed); -+ MAKE_ENTRY(msgs_recover); -+ MAKE_ENTRY(msgs_send_failures); -+ MAKE_ENTRY(msgs_sent); -+ MAKE_ENTRY(newsites); -+ MAKE_DB_LSN_ENTRY(next_lsn); -+ MAKE_ENTRY(nsites); -+ MAKE_ENTRY(nthrottles); -+ MAKE_ENTRY(outdated); -+#if (DBVER >= 46) -+ MAKE_ENTRY(startsync_delayed); - #endif -+ MAKE_ENTRY(status); -+ MAKE_ENTRY(txns_applied); -+ MAKE_DB_LSN_ENTRY(waiting_lsn); -+ -+#undef MAKE_DB_LSN_ENTRY -+#undef MAKE_ENTRY -+ -+ free(statp); -+ return stats; -+} - - /* --------------------------------------------------------------------- */ - /* REPLICATION METHODS: Replication Manager */ -@@ -5947,9 +6689,9 @@ DBTxn_prepare(DBTxnObject* self, PyObjec - if (!PyArg_ParseTuple(args, "s#:prepare", &gid, &gid_size)) - return NULL; - -- if (gid_size != DB_XIDDATASIZE) { -+ if (gid_size != DB_GID_SIZE) { - PyErr_SetString(PyExc_TypeError, -- "gid must be DB_XIDDATASIZE bytes long"); -+ "gid must be DB_GID_SIZE bytes long"); - return NULL; - } - -@@ -6064,6 +6806,76 @@ DBTxn_id(DBTxnObject* self) - return NUMBER_FromLong(id); - } - -+ -+static PyObject* -+DBTxn_set_timeout(DBTxnObject* self, PyObject* args, PyObject* kwargs) -+{ -+ int err; -+ u_int32_t flags=0; -+ u_int32_t timeout = 0; -+ static char* kwnames[] = { "timeout", "flags", NULL }; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:set_timeout", kwnames, -+ &timeout, &flags)) { -+ return NULL; -+ } -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->txn->set_timeout(self->txn, (db_timeout_t)timeout, flags); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ -+ -+#if (DBVER >= 44) -+static PyObject* -+DBTxn_set_name(DBTxnObject* self, PyObject* args) -+{ -+ int err; -+ const char *name; -+ -+ if (!PyArg_ParseTuple(args, "s:set_name", &name)) -+ return NULL; -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->txn->set_name(self->txn, name); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+#endif -+ -+ -+#if (DBVER >= 44) -+static PyObject* -+DBTxn_get_name(DBTxnObject* self) -+{ -+ int err; -+ const char *name; -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->txn->get_name(self->txn, &name); -+ MYDB_END_ALLOW_THREADS; -+ -+ RETURN_IF_ERR(); -+#if (PY_VERSION_HEX < 0x03000000) -+ if (!name) { -+ return PyString_FromString(""); -+ } -+ return PyString_FromString(name); -+#else -+ if (!name) { -+ return PyUnicode_FromString(""); -+ } -+ return PyUnicode_FromString(name); -+#endif -+} -+#endif -+ -+ - #if (DBVER >= 43) - /* 
--------------------------------------------------------------------- */ - /* DBSequence methods */ -@@ -6167,12 +6979,12 @@ DBSequence_get_key(DBSequenceObject* sel - } - - static PyObject* --DBSequence_init_value(DBSequenceObject* self, PyObject* args) -+DBSequence_initial_value(DBSequenceObject* self, PyObject* args) - { - int err; - PY_LONG_LONG value; - db_seq_t value2; -- if (!PyArg_ParseTuple(args,"L:init_value", &value)) -+ if (!PyArg_ParseTuple(args,"L:initial_value", &value)) - return NULL; - CHECK_SEQUENCE_NOT_CLOSED(self) - -@@ -6350,6 +7162,29 @@ DBSequence_get_range(DBSequenceObject* s - return Py_BuildValue("(LL)", min, max); - } - -+ -+static PyObject* -+DBSequence_stat_print(DBSequenceObject* self, PyObject* args, PyObject *kwargs) -+{ -+ int err; -+ int flags=0; -+ static char* kwnames[] = { "flags", NULL }; -+ -+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat_print", -+ kwnames, &flags)) -+ { -+ return NULL; -+ } -+ -+ CHECK_SEQUENCE_NOT_CLOSED(self); -+ -+ MYDB_BEGIN_ALLOW_THREADS; -+ err = self->sequence->stat_print(self->sequence, flags); -+ MYDB_END_ALLOW_THREADS; -+ RETURN_IF_ERR(); -+ RETURN_NONE(); -+} -+ - static PyObject* - DBSequence_stat(DBSequenceObject* self, PyObject* args, PyObject* kwargs) - { -@@ -6401,11 +7236,18 @@ static PyMethodDef DB_methods[] = { - {"append", (PyCFunction)DB_append, METH_VARARGS|METH_KEYWORDS}, - {"associate", (PyCFunction)DB_associate, METH_VARARGS|METH_KEYWORDS}, - {"close", (PyCFunction)DB_close, METH_VARARGS}, -+#if (DBVER >= 47) -+ {"compact", (PyCFunction)DB_compact, METH_VARARGS|METH_KEYWORDS}, -+#endif - {"consume", (PyCFunction)DB_consume, METH_VARARGS|METH_KEYWORDS}, - {"consume_wait", (PyCFunction)DB_consume_wait, METH_VARARGS|METH_KEYWORDS}, - {"cursor", (PyCFunction)DB_cursor, METH_VARARGS|METH_KEYWORDS}, - {"delete", (PyCFunction)DB_delete, METH_VARARGS|METH_KEYWORDS}, - {"fd", (PyCFunction)DB_fd, METH_NOARGS}, -+#if (DBVER >= 46) -+ {"exists", (PyCFunction)DB_exists, -+ METH_VARARGS|METH_KEYWORDS}, -+#endif - {"get", (PyCFunction)DB_get, METH_VARARGS|METH_KEYWORDS}, - {"pget", (PyCFunction)DB_pget, METH_VARARGS|METH_KEYWORDS}, - {"get_both", (PyCFunction)DB_get_both, METH_VARARGS|METH_KEYWORDS}, -@@ -6424,9 +7266,14 @@ static PyMethodDef DB_methods[] = { - {"set_bt_minkey", (PyCFunction)DB_set_bt_minkey, METH_VARARGS}, - {"set_bt_compare", (PyCFunction)DB_set_bt_compare, METH_O}, - {"set_cachesize", (PyCFunction)DB_set_cachesize, METH_VARARGS}, --#if (DBVER >= 41) -+#if (DBVER >= 42) -+ {"get_cachesize", (PyCFunction)DB_get_cachesize, METH_NOARGS}, -+#endif - {"set_encrypt", (PyCFunction)DB_set_encrypt, METH_VARARGS|METH_KEYWORDS}, -+#if (DBVER >= 42) -+ {"get_encrypt_flags", (PyCFunction)DB_get_encrypt_flags, METH_NOARGS}, - #endif -+ - {"set_flags", (PyCFunction)DB_set_flags, METH_VARARGS}, - {"set_h_ffactor", (PyCFunction)DB_set_h_ffactor, METH_VARARGS}, - {"set_h_nelem", (PyCFunction)DB_set_h_nelem, METH_VARARGS}, -@@ -6451,6 +7298,20 @@ static PyMethodDef DB_methods[] = { - }; - - -+/* We need this to support __contains__() */ -+static PySequenceMethods DB_sequence = { -+ 0, /* sq_length, mapping wins here */ -+ 0, /* sq_concat */ -+ 0, /* sq_repeat */ -+ 0, /* sq_item */ -+ 0, /* sq_slice */ -+ 0, /* sq_ass_item */ -+ 0, /* sq_ass_slice */ -+ (objobjproc)DB_contains, /* sq_contains */ -+ 0, /* sq_inplace_concat */ -+ 0, /* sq_inplace_repeat */ -+}; -+ - static PyMappingMethods DB_mapping = { - DB_length, /*mp_length*/ - (binaryfunc)DB_subscript, /*mp_subscript*/ -@@ -6481,8 +7342,17 @@ static 
PyMethodDef DBCursor_methods[] = - {"consume", (PyCFunction)DBC_consume, METH_VARARGS|METH_KEYWORDS}, - {"next_dup", (PyCFunction)DBC_next_dup, METH_VARARGS|METH_KEYWORDS}, - {"next_nodup", (PyCFunction)DBC_next_nodup, METH_VARARGS|METH_KEYWORDS}, -+#if (DBVER >= 46) -+ {"prev_dup", (PyCFunction)DBC_prev_dup, -+ METH_VARARGS|METH_KEYWORDS}, -+#endif - {"prev_nodup", (PyCFunction)DBC_prev_nodup, METH_VARARGS|METH_KEYWORDS}, - {"join_item", (PyCFunction)DBC_join_item, METH_VARARGS}, -+#if (DBVER >= 46) -+ {"set_priority", (PyCFunction)DBC_set_priority, -+ METH_VARARGS|METH_KEYWORDS}, -+ {"get_priority", (PyCFunction)DBC_get_priority, METH_NOARGS}, -+#endif - {NULL, NULL} /* sentinel */ - }; - -@@ -6491,57 +7361,94 @@ static PyMethodDef DBEnv_methods[] = { - {"close", (PyCFunction)DBEnv_close, METH_VARARGS}, - {"open", (PyCFunction)DBEnv_open, METH_VARARGS}, - {"remove", (PyCFunction)DBEnv_remove, METH_VARARGS}, --#if (DBVER >= 41) - {"dbremove", (PyCFunction)DBEnv_dbremove, METH_VARARGS|METH_KEYWORDS}, - {"dbrename", (PyCFunction)DBEnv_dbrename, METH_VARARGS|METH_KEYWORDS}, - {"set_encrypt", (PyCFunction)DBEnv_set_encrypt, METH_VARARGS|METH_KEYWORDS}, -+#if (DBVER >= 42) -+ {"get_encrypt_flags", (PyCFunction)DBEnv_get_encrypt_flags, METH_NOARGS}, -+ {"get_timeout", (PyCFunction)DBEnv_get_timeout, -+ METH_VARARGS|METH_KEYWORDS}, -+#endif -+ {"set_timeout", (PyCFunction)DBEnv_set_timeout, METH_VARARGS|METH_KEYWORDS}, -+ {"set_shm_key", (PyCFunction)DBEnv_set_shm_key, METH_VARARGS}, -+#if (DBVER >= 42) -+ {"get_shm_key", (PyCFunction)DBEnv_get_shm_key, METH_NOARGS}, -+#endif -+ {"set_cachesize", (PyCFunction)DBEnv_set_cachesize, METH_VARARGS}, -+#if (DBVER >= 42) -+ {"get_cachesize", (PyCFunction)DBEnv_get_cachesize, METH_NOARGS}, -+#endif -+#if (DBVER >= 44) -+ {"mutex_set_max", (PyCFunction)DBEnv_mutex_set_max, METH_VARARGS}, -+ {"mutex_get_max", (PyCFunction)DBEnv_mutex_get_max, METH_NOARGS}, -+ {"mutex_set_align", (PyCFunction)DBEnv_mutex_set_align, METH_VARARGS}, -+ {"mutex_get_align", (PyCFunction)DBEnv_mutex_get_align, METH_NOARGS}, -+ {"mutex_set_increment", (PyCFunction)DBEnv_mutex_set_increment, -+ METH_VARARGS}, -+ {"mutex_get_increment", (PyCFunction)DBEnv_mutex_get_increment, -+ METH_NOARGS}, -+ {"mutex_set_tas_spins", (PyCFunction)DBEnv_mutex_set_tas_spins, -+ METH_VARARGS}, -+ {"mutex_get_tas_spins", (PyCFunction)DBEnv_mutex_get_tas_spins, -+ METH_NOARGS}, -+#endif -+ {"set_data_dir", (PyCFunction)DBEnv_set_data_dir, METH_VARARGS}, -+#if (DBVER >= 42) -+ {"get_data_dirs", (PyCFunction)DBEnv_get_data_dirs, METH_NOARGS}, - #endif -- {"set_timeout", (PyCFunction)DBEnv_set_timeout, METH_VARARGS|METH_KEYWORDS}, -- {"set_shm_key", (PyCFunction)DBEnv_set_shm_key, METH_VARARGS}, -- {"set_cachesize", (PyCFunction)DBEnv_set_cachesize, METH_VARARGS}, -- {"set_data_dir", (PyCFunction)DBEnv_set_data_dir, METH_VARARGS}, -- {"set_flags", (PyCFunction)DBEnv_set_flags, METH_VARARGS}, -+ {"set_flags", (PyCFunction)DBEnv_set_flags, METH_VARARGS}, - #if (DBVER >= 47) -- {"log_set_config", (PyCFunction)DBEnv_log_set_config, METH_VARARGS}, -+ {"log_set_config", (PyCFunction)DBEnv_log_set_config, METH_VARARGS}, - #endif -- {"set_lg_bsize", (PyCFunction)DBEnv_set_lg_bsize, METH_VARARGS}, -- {"set_lg_dir", (PyCFunction)DBEnv_set_lg_dir, METH_VARARGS}, -- {"set_lg_max", (PyCFunction)DBEnv_set_lg_max, METH_VARARGS}, -+ {"set_lg_bsize", (PyCFunction)DBEnv_set_lg_bsize, METH_VARARGS}, -+ {"set_lg_dir", (PyCFunction)DBEnv_set_lg_dir, METH_VARARGS}, -+ {"set_lg_max", (PyCFunction)DBEnv_set_lg_max, 
METH_VARARGS}, - #if (DBVER >= 42) -- {"get_lg_max", (PyCFunction)DBEnv_get_lg_max, METH_NOARGS}, -+ {"get_lg_max", (PyCFunction)DBEnv_get_lg_max, METH_NOARGS}, - #endif - {"set_lg_regionmax",(PyCFunction)DBEnv_set_lg_regionmax, METH_VARARGS}, -- {"set_lk_detect", (PyCFunction)DBEnv_set_lk_detect, METH_VARARGS}, -+ {"set_lk_detect", (PyCFunction)DBEnv_set_lk_detect, METH_VARARGS}, - #if (DBVER < 45) -- {"set_lk_max", (PyCFunction)DBEnv_set_lk_max, METH_VARARGS}, -+ {"set_lk_max", (PyCFunction)DBEnv_set_lk_max, METH_VARARGS}, - #endif - {"set_lk_max_locks", (PyCFunction)DBEnv_set_lk_max_locks, METH_VARARGS}, - {"set_lk_max_lockers", (PyCFunction)DBEnv_set_lk_max_lockers, METH_VARARGS}, - {"set_lk_max_objects", (PyCFunction)DBEnv_set_lk_max_objects, METH_VARARGS}, -- {"set_mp_mmapsize", (PyCFunction)DBEnv_set_mp_mmapsize, METH_VARARGS}, -- {"set_tmp_dir", (PyCFunction)DBEnv_set_tmp_dir, METH_VARARGS}, -- {"txn_begin", (PyCFunction)DBEnv_txn_begin, METH_VARARGS|METH_KEYWORDS}, -- {"txn_checkpoint", (PyCFunction)DBEnv_txn_checkpoint, METH_VARARGS}, -- {"txn_stat", (PyCFunction)DBEnv_txn_stat, METH_VARARGS}, -- {"set_tx_max", (PyCFunction)DBEnv_set_tx_max, METH_VARARGS}, -+ {"set_mp_mmapsize", (PyCFunction)DBEnv_set_mp_mmapsize, METH_VARARGS}, -+ {"set_tmp_dir", (PyCFunction)DBEnv_set_tmp_dir, METH_VARARGS}, -+ {"txn_begin", (PyCFunction)DBEnv_txn_begin, METH_VARARGS|METH_KEYWORDS}, -+ {"txn_checkpoint", (PyCFunction)DBEnv_txn_checkpoint, METH_VARARGS}, -+ {"txn_stat", (PyCFunction)DBEnv_txn_stat, METH_VARARGS}, -+#if (DBVER >= 43) -+ {"txn_stat_print", (PyCFunction)DBEnv_txn_stat_print, -+ METH_VARARGS|METH_KEYWORDS}, -+#endif -+#if (DBVER >= 42) -+ {"get_tx_max", (PyCFunction)DBEnv_get_tx_max, METH_NOARGS}, -+ {"get_tx_timestamp", (PyCFunction)DBEnv_get_tx_timestamp, METH_NOARGS}, -+#endif -+ {"set_tx_max", (PyCFunction)DBEnv_set_tx_max, METH_VARARGS}, - {"set_tx_timestamp", (PyCFunction)DBEnv_set_tx_timestamp, METH_VARARGS}, -- {"lock_detect", (PyCFunction)DBEnv_lock_detect, METH_VARARGS}, -- {"lock_get", (PyCFunction)DBEnv_lock_get, METH_VARARGS}, -- {"lock_id", (PyCFunction)DBEnv_lock_id, METH_NOARGS}, -- {"lock_id_free", (PyCFunction)DBEnv_lock_id_free, METH_VARARGS}, -- {"lock_put", (PyCFunction)DBEnv_lock_put, METH_VARARGS}, -- {"lock_stat", (PyCFunction)DBEnv_lock_stat, METH_VARARGS}, -- {"log_archive", (PyCFunction)DBEnv_log_archive, METH_VARARGS}, -- {"log_flush", (PyCFunction)DBEnv_log_flush, METH_NOARGS}, -- {"log_stat", (PyCFunction)DBEnv_log_stat, METH_VARARGS}, -+ {"lock_detect", (PyCFunction)DBEnv_lock_detect, METH_VARARGS}, -+ {"lock_get", (PyCFunction)DBEnv_lock_get, METH_VARARGS}, -+ {"lock_id", (PyCFunction)DBEnv_lock_id, METH_NOARGS}, -+ {"lock_id_free", (PyCFunction)DBEnv_lock_id_free, METH_VARARGS}, -+ {"lock_put", (PyCFunction)DBEnv_lock_put, METH_VARARGS}, -+ {"lock_stat", (PyCFunction)DBEnv_lock_stat, METH_VARARGS}, -+ {"log_archive", (PyCFunction)DBEnv_log_archive, METH_VARARGS}, -+ {"log_flush", (PyCFunction)DBEnv_log_flush, METH_NOARGS}, -+ {"log_stat", (PyCFunction)DBEnv_log_stat, METH_VARARGS}, - #if (DBVER >= 44) -- {"lsn_reset", (PyCFunction)DBEnv_lsn_reset, METH_VARARGS|METH_KEYWORDS}, -+ {"fileid_reset", (PyCFunction)DBEnv_fileid_reset, METH_VARARGS|METH_KEYWORDS}, -+ {"lsn_reset", (PyCFunction)DBEnv_lsn_reset, METH_VARARGS|METH_KEYWORDS}, - #endif - {"set_get_returns_none",(PyCFunction)DBEnv_set_get_returns_none, METH_VARARGS}, -- {"txn_recover", (PyCFunction)DBEnv_txn_recover, METH_NOARGS}, -+ {"txn_recover", (PyCFunction)DBEnv_txn_recover, METH_NOARGS}, 
-+#if (DBVER < 48) - {"set_rpc_server", (PyCFunction)DBEnv_set_rpc_server, - METH_VARARGS||METH_KEYWORDS}, -- {"set_verbose", (PyCFunction)DBEnv_set_verbose, METH_VARARGS}, -+#endif -+ {"set_verbose", (PyCFunction)DBEnv_set_verbose, METH_VARARGS}, - #if (DBVER >= 42) - {"get_verbose", (PyCFunction)DBEnv_get_verbose, METH_VARARGS}, - #endif -@@ -6579,6 +7486,17 @@ static PyMethodDef DBEnv_methods[] = { - {"rep_set_timeout", (PyCFunction)DBEnv_rep_set_timeout, METH_VARARGS}, - {"rep_get_timeout", (PyCFunction)DBEnv_rep_get_timeout, METH_VARARGS}, - #endif -+#if (DBVER >= 47) -+ {"rep_set_clockskew", (PyCFunction)DBEnv_rep_set_clockskew, METH_VARARGS}, -+ {"rep_get_clockskew", (PyCFunction)DBEnv_rep_get_clockskew, METH_VARARGS}, -+#endif -+ {"rep_stat", (PyCFunction)DBEnv_rep_stat, -+ METH_VARARGS|METH_KEYWORDS}, -+#if (DBVER >= 43) -+ {"rep_stat_print", (PyCFunction)DBEnv_rep_stat_print, -+ METH_VARARGS|METH_KEYWORDS}, -+#endif -+ - #if (DBVER >= 45) - {"repmgr_start", (PyCFunction)DBEnv_repmgr_start, - METH_VARARGS|METH_KEYWORDS}, -@@ -6609,6 +7527,12 @@ static PyMethodDef DBTxn_methods[] = { - {"discard", (PyCFunction)DBTxn_discard, METH_NOARGS}, - {"abort", (PyCFunction)DBTxn_abort, METH_NOARGS}, - {"id", (PyCFunction)DBTxn_id, METH_NOARGS}, -+ {"set_timeout", (PyCFunction)DBTxn_set_timeout, -+ METH_VARARGS|METH_KEYWORDS}, -+#if (DBVER >= 44) -+ {"set_name", (PyCFunction)DBTxn_set_name, METH_VARARGS}, -+ {"get_name", (PyCFunction)DBTxn_get_name, METH_NOARGS}, -+#endif - {NULL, NULL} /* sentinel */ - }; - -@@ -6619,7 +7543,7 @@ static PyMethodDef DBSequence_methods[] - {"get", (PyCFunction)DBSequence_get, METH_VARARGS|METH_KEYWORDS}, - {"get_dbp", (PyCFunction)DBSequence_get_dbp, METH_NOARGS}, - {"get_key", (PyCFunction)DBSequence_get_key, METH_NOARGS}, -- {"init_value", (PyCFunction)DBSequence_init_value, METH_VARARGS}, -+ {"initial_value", (PyCFunction)DBSequence_initial_value, METH_VARARGS}, - {"open", (PyCFunction)DBSequence_open, METH_VARARGS|METH_KEYWORDS}, - {"remove", (PyCFunction)DBSequence_remove, METH_VARARGS|METH_KEYWORDS}, - {"set_cachesize", (PyCFunction)DBSequence_set_cachesize, METH_VARARGS}, -@@ -6629,6 +7553,8 @@ static PyMethodDef DBSequence_methods[] - {"set_range", (PyCFunction)DBSequence_set_range, METH_VARARGS}, - {"get_range", (PyCFunction)DBSequence_get_range, METH_NOARGS}, - {"stat", (PyCFunction)DBSequence_stat, METH_VARARGS|METH_KEYWORDS}, -+ {"stat_print", (PyCFunction)DBSequence_stat_print, -+ METH_VARARGS|METH_KEYWORDS}, - {NULL, NULL} /* sentinel */ - }; - #endif -@@ -6677,7 +7603,7 @@ statichere PyTypeObject DB_Type = { - 0, /*tp_compare*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ -- 0, /*tp_as_sequence*/ -+ &DB_sequence,/*tp_as_sequence*/ - &DB_mapping,/*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /* tp_call */ -@@ -7029,10 +7955,21 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - { - PyObject* m; - PyObject* d; -- PyObject* pybsddb_version_s = PyBytes_FromString( PY_BSDDB_VERSION ); -- PyObject* db_version_s = PyBytes_FromString( DB_VERSION_STRING ); -- PyObject* cvsid_s = PyBytes_FromString( rcs_id ); - PyObject* py_api; -+ PyObject* pybsddb_version_s; -+ PyObject* db_version_s; -+ PyObject* cvsid_s; -+ -+#if (PY_VERSION_HEX < 0x03000000) -+ pybsddb_version_s = PyString_FromString(PY_BSDDB_VERSION); -+ db_version_s = PyString_FromString(DB_VERSION_STRING); -+ cvsid_s = PyString_FromString(rcs_id); -+#else -+ /* This data should be ascii, so UTF-8 conversion is fine */ -+ pybsddb_version_s = PyUnicode_FromString(PY_BSDDB_VERSION); -+ db_version_s = 
PyUnicode_FromString(DB_VERSION_STRING); -+ cvsid_s = PyUnicode_FromString(rcs_id); -+#endif - - /* Initialize object types */ - if ((PyType_Ready(&DB_Type) < 0) -@@ -7089,6 +8026,7 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_MAX_PAGES); - ADD_INT(d, DB_MAX_RECORDS); - -+#if (DBVER < 48) - #if (DBVER >= 42) - ADD_INT(d, DB_RPCCLIENT); - #else -@@ -7096,7 +8034,11 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - /* allow apps to be written using DB_RPCCLIENT on older Berkeley DB */ - _addIntToDict(d, "DB_RPCCLIENT", DB_CLIENT); - #endif -+#endif -+ -+#if (DBVER < 48) - ADD_INT(d, DB_XA_CREATE); -+#endif - - ADD_INT(d, DB_CREATE); - ADD_INT(d, DB_NOMMAP); -@@ -7113,7 +8055,13 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_INIT_TXN); - ADD_INT(d, DB_JOINENV); - -+#if (DBVER >= 48) -+ ADD_INT(d, DB_GID_SIZE); -+#else - ADD_INT(d, DB_XIDDATASIZE); -+ /* Allow new code to work in old BDB releases */ -+ _addIntToDict(d, "DB_GID_SIZE", DB_XIDDATASIZE); -+#endif - - ADD_INT(d, DB_RECOVER); - ADD_INT(d, DB_RECOVER_FATAL); -@@ -7128,6 +8076,10 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_TXN_SYNC); - ADD_INT(d, DB_TXN_NOWAIT); - -+#if (DBVER >= 46) -+ ADD_INT(d, DB_TXN_WAIT); -+#endif -+ - ADD_INT(d, DB_EXCL); - ADD_INT(d, DB_FCNTL_LOCKING); - ADD_INT(d, DB_ODDFILESIZE); -@@ -7233,12 +8185,6 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_CACHED_COUNTS); - #endif - --#if (DBVER >= 41) -- _addIntToDict(d, "DB_CHECKPOINT", 0); --#else -- ADD_INT(d, DB_CHECKPOINT); -- ADD_INT(d, DB_CURLSN); --#endif - #if (DBVER <= 41) - ADD_INT(d, DB_COMMIT); - #endif -@@ -7249,6 +8195,7 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_FIRST); - ADD_INT(d, DB_FLUSH); - ADD_INT(d, DB_GET_BOTH); -+ ADD_INT(d, DB_GET_BOTH_RANGE); - ADD_INT(d, DB_GET_RECNO); - ADD_INT(d, DB_JOIN_ITEM); - ADD_INT(d, DB_KEYFIRST); -@@ -7263,6 +8210,9 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_POSITION); - ADD_INT(d, DB_PREV); - ADD_INT(d, DB_PREV_NODUP); -+#if (DBVER >= 46) -+ ADD_INT(d, DB_PREV_DUP); -+#endif - #if (DBVER < 45) - ADD_INT(d, DB_RECORDCOUNT); - #endif -@@ -7278,17 +8228,18 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_MULTIPLE_KEY); - - #if (DBVER >= 44) -+ ADD_INT(d, DB_IMMUTABLE_KEY); - ADD_INT(d, DB_READ_UNCOMMITTED); /* replaces DB_DIRTY_READ in 4.4 */ - ADD_INT(d, DB_READ_COMMITTED); - #endif - -+#if (DBVER >= 44) -+ ADD_INT(d, DB_FREELIST_ONLY); -+ ADD_INT(d, DB_FREE_SPACE); -+#endif -+ - ADD_INT(d, DB_DONOTINDEX); - --#if (DBVER >= 41) -- _addIntToDict(d, "DB_INCOMPLETE", 0); --#else -- ADD_INT(d, DB_INCOMPLETE); --#endif - ADD_INT(d, DB_KEYEMPTY); - ADD_INT(d, DB_KEYEXIST); - ADD_INT(d, DB_LOCK_DEADLOCK); -@@ -7309,14 +8260,15 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_PANIC_ENVIRONMENT); - ADD_INT(d, DB_NOPANIC); - --#if (DBVER >= 41) - ADD_INT(d, DB_OVERWRITE); --#endif - --#ifdef DB_REGISTER -+#if (DBVER >= 44) - ADD_INT(d, DB_REGISTER); - #endif - -+ ADD_INT(d, DB_EID_INVALID); -+ ADD_INT(d, DB_EID_BROADCAST); -+ - #if (DBVER >= 42) - ADD_INT(d, DB_TIME_NOTGRANTED); - ADD_INT(d, DB_TXN_NOT_DURABLE); -@@ -7389,6 +8341,32 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - - ADD_INT(d, DB_REP_MASTER); - ADD_INT(d, DB_REP_CLIENT); -+ -+ ADD_INT(d, DB_REP_PERMANENT); -+ -+#if (DBVER >= 44) -+ ADD_INT(d, DB_REP_CONF_NOAUTOINIT); -+ ADD_INT(d, DB_REP_CONF_DELAYCLIENT); -+ ADD_INT(d, DB_REP_CONF_BULK); -+ ADD_INT(d, DB_REP_CONF_NOWAIT); -+ ADD_INT(d, DB_REP_ANYWHERE); -+ ADD_INT(d, DB_REP_REREQUEST); -+#endif -+ -+#if (DBVER >= 42) -+ ADD_INT(d, 
DB_REP_NOBUFFER); -+#endif -+ -+#if (DBVER >= 46) -+ ADD_INT(d, DB_REP_LEASE_EXPIRED); -+ ADD_INT(d, DB_IGNORE_LEASE); -+#endif -+ -+#if (DBVER >= 47) -+ ADD_INT(d, DB_REP_CONF_LEASE); -+ ADD_INT(d, DB_REPMGR_CONF_2SITE_STRICT); -+#endif -+ - #if (DBVER >= 45) - ADD_INT(d, DB_REP_ELECTION); - -@@ -7400,6 +8378,11 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - #if (DBVER >= 46) - ADD_INT(d, DB_REP_CHECKPOINT_DELAY); - ADD_INT(d, DB_REP_FULL_ELECTION_TIMEOUT); -+ ADD_INT(d, DB_REP_LEASE_TIMEOUT); -+#endif -+#if (DBVER >= 47) -+ ADD_INT(d, DB_REP_HEARTBEAT_MONITOR); -+ ADD_INT(d, DB_REP_HEARTBEAT_SEND); - #endif - - #if (DBVER >= 45) -@@ -7412,7 +8395,6 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_REPMGR_ACKS_QUORUM); - ADD_INT(d, DB_REPMGR_CONNECTED); - ADD_INT(d, DB_REPMGR_DISCONNECTED); -- ADD_INT(d, DB_STAT_CLEAR); - ADD_INT(d, DB_STAT_ALL); - #endif - -@@ -7428,12 +8410,16 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - ADD_INT(d, DB_DSYNC_LOG); - #endif - --#if (DBVER >= 41) - ADD_INT(d, DB_ENCRYPT_AES); - ADD_INT(d, DB_AUTO_COMMIT); --#else -- /* allow Berkeley DB 4.1 aware apps to run on older versions */ -- _addIntToDict(d, "DB_AUTO_COMMIT", 0); -+ ADD_INT(d, DB_PRIORITY_VERY_LOW); -+ ADD_INT(d, DB_PRIORITY_LOW); -+ ADD_INT(d, DB_PRIORITY_DEFAULT); -+ ADD_INT(d, DB_PRIORITY_HIGH); -+ ADD_INT(d, DB_PRIORITY_VERY_HIGH); -+ -+#if (DBVER >= 46) -+ ADD_INT(d, DB_PRIORITY_UNCHANGED); - #endif - - ADD_INT(d, EINVAL); -@@ -7497,10 +8483,6 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - } - #endif - -- --#if !INCOMPLETE_IS_WARNING -- MAKE_EX(DBIncompleteError); --#endif - MAKE_EX(DBCursorClosedError); - MAKE_EX(DBKeyEmptyError); - MAKE_EX(DBKeyExistError); -@@ -7528,9 +8510,16 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - #if (DBVER >= 42) - MAKE_EX(DBRepHandleDeadError); - #endif -+#if (DBVER >= 44) -+ MAKE_EX(DBRepLockoutError); -+#endif - - MAKE_EX(DBRepUnavailError); - -+#if (DBVER >= 46) -+ MAKE_EX(DBRepLeaseExpiredError); -+#endif -+ - #undef MAKE_EX - - /* Initiliase the C API structure and add it to the module */ -@@ -7544,7 +8533,24 @@ PyMODINIT_FUNC PyInit__bsddb(void) / - #endif - bsddb_api.makeDBError = makeDBError; - -+ /* -+ ** Capsules exist from Python 3.1, but I -+ ** don't want to break the API compatibility -+ ** for already published Python versions. -+ */ -+#if (PY_VERSION_HEX < 0x03020000) - py_api = PyCObject_FromVoidPtr((void*)&bsddb_api, NULL); -+#else -+ { -+ char py_api_name[250]; -+ -+ strcpy(py_api_name, _bsddbModuleName); -+ strcat(py_api_name, ".api"); -+ -+ py_api = PyCapsule_New((void*)&bsddb_api, py_api_name, NULL); -+ } -+#endif -+ - PyDict_SetItemString(d, "api", py_api); - Py_DECREF(py_api); - -diff -Nupr Python-2.6.4.orig/Modules/bsddb.h Python-2.6.4/Modules/bsddb.h ---- Python-2.6.4.orig/Modules/bsddb.h 2008-09-28 19:24:19.000000000 -0400 -+++ Python-2.6.4/Modules/bsddb.h 2009-12-04 07:34:56.000000000 -0500 -@@ -105,7 +105,7 @@ - #error "eek! 
DBVER can't handle minor versions > 9"
- #endif
-
--#define PY_BSDDB_VERSION "4.7.3"
-+#define PY_BSDDB_VERSION "4.8.1"
-
- /* Python object definitions */
-
-@@ -220,6 +220,7 @@ typedef struct DBSequenceObject {
- /* To access the structure from an external module, use code like the
-    following (error checking missed out for clarity):
-
-+   // If you are using Python 3.2:
-    BSDDB_api* bsddb_api;
-    PyObject* mod;
-    PyObject* cobj;
-@@ -231,6 +232,15 @@ typedef struct DBSequenceObject {
-    Py_DECREF(cobj);
-    Py_DECREF(mod);
-
-+
-+   // If you are using Python 3.2 or up:
-+   BSDDB_api* bsddb_api;
-+
-+   // Use "bsddb3._pybsddb.api" if you're using
-+   // the standalone pybsddb add-on.
-+   bsddb_api = (void **)PyCapsule_Import("bsddb._bsddb.api", 1);
-+
-+
-    The structure's members must not be changed.
- */
-
-@@ -247,7 +257,6 @@ typedef struct {
-
-     /* Functions */
-     int (*makeDBError)(int err);
--
- } BSDDB_api;
-
-
diff --git a/pkgs/core/python/python.nm b/pkgs/core/python/python.nm
index c8afcfc..ae8e78b 100644
--- a/pkgs/core/python/python.nm
+++ b/pkgs/core/python/python.nm
@@ -25,7 +25,7 @@ include $(PKGROOT)/Include

 PKG_NAME = Python
-PKG_VER = 2.6.4
+PKG_VER = 2.6.5
 PKG_REL = 0

 PKG_MAINTAINER =
diff --git a/tools/quality-agent.d/002-bad-symlinks b/tools/quality-agent.d/002-bad-symlinks
index f42dbde..f3217fd 100755
--- a/tools/quality-agent.d/002-bad-symlinks
+++ b/tools/quality-agent.d/002-bad-symlinks
@@ -14,6 +14,10 @@ for link in $(find ${BUILDROOT} -type l); do
         log " absolute symlink: ${link}"
         failed=1
     fi
+    if [ ! -e "${link%/*}/${destination}" ]; then
+        log " not existant destination: ${link} -> ${destination}"
+        failed=1
+    fi
 done

 exit ${failed}
diff --git a/tools/quality-agent.d/050-root-links-to-usr b/tools/quality-agent.d/050-root-links-to-usr
index c514136..0027ff3 100755
--- a/tools/quality-agent.d/050-root-links-to-usr
+++ b/tools/quality-agent.d/050-root-links-to-usr
@@ -7,6 +7,13 @@ log "Check for binaries in /bin or /sbin that link to /usr/..."
 for file in ${BUILDROOT}/{bin,lib,sbin}/*; do
     [ -f "${file}" ] || continue
     log " ${file}"
+
+    interpreter=$(get_interpreter ${file})
+    if [ ! -e "${BUILDROOT}${interpreter}" ]; then
+        log " SKIPPED because interpreter is not available"
+        continue
+    fi
+
     libs=$(ldd ${file})
     if grep -q /usr/lib <<<${libs}; then
         log "ERROR: ${file} links to libs in /usr/lib..."
diff --git a/tools/quality-agent.d/095-directory-layout b/tools/quality-agent.d/095-directory-layout
index 428f0b8..cf1f0bd 100755
--- a/tools/quality-agent.d/095-directory-layout
+++ b/tools/quality-agent.d/095-directory-layout
@@ -19,6 +19,8 @@ log " Checking for directories that should not be there"
 check /etc/init.d
 check /etc/rc.d
 check /lib/pkgconfig
+check /usr/etc
+check /usr/local
 check /usr/man
 check /usr/var
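Side note on the new check in tools/quality-agent.d/002-bad-symlinks above: it relies on the shell expansion ${link%/*}, which strips the last path component and leaves the directory that contains the link, so a relative symlink target is tested right where the link lives. The following is only an illustrative sketch of that idea; the /tmp/buildroot paths are hypothetical and not part of the commit:

    #!/bin/bash
    # Illustration only: hypothetical paths, not taken from the commit.
    link="/tmp/buildroot/usr/lib/libfoo.so"    # a symlink inside some build root
    destination=$(readlink "${link}")          # e.g. "libfoo.so.1.2.3" (relative)

    # ${link%/*} is "/tmp/buildroot/usr/lib", the directory holding the link,
    # so a relative destination is resolved against that directory.
    if [ ! -e "${link%/*}/${destination}" ]; then
        echo "dangling symlink: ${link} -> ${destination}"
    fi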
diff --git a/tools/quality-agent.d/qa-include b/tools/quality-agent.d/qa-include
index 43e0e5b..4504623 100644
--- a/tools/quality-agent.d/qa-include
+++ b/tools/quality-agent.d/qa-include
@@ -8,3 +8,10 @@ if [ -z "${BUILDROOT}" ]; then
     echo "${0##*/}: ERROR: BUILDROOT is not set." >&2
     exit 1
 fi
+
+get_interpreter() {
+    local file=${1}
+
+    readelf -l ${file} | grep "program interpreter" | \
+        tr -d "]" | awk '{ print $NF }'
+}
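The get_interpreter helper added to qa-include above parses the PT_INTERP line that readelf -l prints for dynamically linked binaries, e.g. "[Requesting program interpreter: /lib/ld-linux.so.2]"; tr drops the closing bracket and awk keeps the last field, the interpreter path. A minimal usage sketch in the spirit of the 050-root-links-to-usr change follows; BUILDROOT and the binary path are hypothetical, not taken from the commit:

    #!/bin/bash
    # Illustration only: BUILDROOT and the file path are made up.
    BUILDROOT="/tmp/buildroot"

    get_interpreter() {
        local file=${1}

        # Prints e.g. "/lib/ld-linux.so.2" for a dynamic ELF binary and
        # nothing for static binaries or scripts.
        readelf -l ${file} | grep "program interpreter" | \
            tr -d "]" | awk '{ print $NF }'
    }

    file="${BUILDROOT}/bin/foo"
    interpreter=$(get_interpreter ${file})

    if [ ! -e "${BUILDROOT}${interpreter}" ]; then
        # As in the 050-root-links-to-usr check, files whose interpreter does
        # not exist inside BUILDROOT are skipped before ldd is run.
        echo "SKIPPED: ${file} (interpreter not available in BUILDROOT)"
    else
        ldd ${file}
    fi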
hooks/post-receive
--
IPFire 3.x development tree