{"diffoscope-json-version": 1, "source1": "/srv/reproducible-results/rbuild-debian/r-b-build.nKylx3QQ/b1/sqlalchemy_2.0.32+ds1-1_armhf.changes", "source2": "/srv/reproducible-results/rbuild-debian/r-b-build.nKylx3QQ/b2/sqlalchemy_2.0.32+ds1-1_armhf.changes", "unified_diff": null, "details": [{"source1": "Files", "source2": "Files", "unified_diff": "@@ -1,5 +1,5 @@\n \n- 0edf423a1d39fac3689b73a709c826ea 3956520 doc optional python-sqlalchemy-doc_2.0.32+ds1-1_all.deb\n+ 3c52278352fb66b47af3ed26d609928f 3956224 doc optional python-sqlalchemy-doc_2.0.32+ds1-1_all.deb\n 9fabb2a962b8bd7da2eceef5e38e1c7f 902280 debug optional python3-sqlalchemy-ext-dbgsym_2.0.32+ds1-1_armhf.deb\n 2b30c02f46036b453f048fc47e70fa6d 123428 python optional python3-sqlalchemy-ext_2.0.32+ds1-1_armhf.deb\n 0955e7f12a0b73c1ab8406c88fbab7d2 1196068 python optional python3-sqlalchemy_2.0.32+ds1-1_all.deb\n"}, {"source1": "python-sqlalchemy-doc_2.0.32+ds1-1_all.deb", "source2": "python-sqlalchemy-doc_2.0.32+ds1-1_all.deb", "unified_diff": null, "details": [{"source1": "file list", "source2": "file list", "unified_diff": "@@ -1,3 +1,3 @@\n -rw-r--r-- 0 0 0 4 2024-08-23 07:52:58.000000 debian-binary\n--rw-r--r-- 0 0 0 13920 2024-08-23 07:52:58.000000 control.tar.xz\n--rw-r--r-- 0 0 0 3942408 2024-08-23 07:52:58.000000 data.tar.xz\n+-rw-r--r-- 0 0 0 13928 2024-08-23 07:52:58.000000 control.tar.xz\n+-rw-r--r-- 0 0 0 3942104 2024-08-23 07:52:58.000000 data.tar.xz\n"}, {"source1": "control.tar.xz", "source2": "control.tar.xz", "unified_diff": null, "details": [{"source1": "control.tar", "source2": "control.tar", "unified_diff": null, "details": [{"source1": "./md5sums", "source2": "./md5sums", "unified_diff": null, "details": [{"source1": "./md5sums", "source2": "./md5sums", "comments": ["Files differ"], "unified_diff": null}]}]}]}, {"source1": "data.tar.xz", "source2": "data.tar.xz", "unified_diff": null, "details": [{"source1": "data.tar", "source2": "data.tar", "unified_diff": null, "details": [{"source1": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_10.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_10.html", "unified_diff": "@@ -592,15 +592,15 @@\n
\n

1.0 Changelog\u00b6

\n
\n

1.0.19\u00b6

\n Released: August 3, 2017
\n

oracle\u00b6

\n
    \n-
  • [oracle] [performance] [bug] [py2k] \u00b6

    Fixed performance regression caused by the fix for #3937 where\n+

  • [oracle] [bug] [performance] [py2k] \u00b6

    Fixed performance regression caused by the fix for #3937 where\n cx_Oracle as of version 5.3 dropped the .UNICODE symbol from its\n namespace, which was interpreted as cx_Oracle\u2019s \u201cWITH_UNICODE\u201d mode being\n turned on unconditionally, which invokes functions on the SQLAlchemy\n side which convert all strings to unicode unconditionally and causing\n a performance impact. In fact, per cx_Oracle\u2019s author the\n \u201cWITH_UNICODE\u201d mode has been removed entirely as of 5.1, so the expensive unicode\n conversion functions are no longer necessary and are disabled if\n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -318,15 +318,15 @@\n # _\bo_\br_\ba_\bc_\bl_\be\n # _\bt_\be_\bs_\bt_\bs\n # _\bm_\bi_\bs_\bc\n *\b**\b**\b**\b**\b**\b* 1\b1.\b.0\b0 C\bCh\bha\ban\bng\bge\bel\blo\bog\bg_\b?\b\u00b6 *\b**\b**\b**\b**\b**\b*\n *\b**\b**\b**\b**\b* 1\b1.\b.0\b0.\b.1\b19\b9_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n Released: August 3, 2017\n *\b**\b**\b**\b* o\bor\bra\bac\bcl\ble\be_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpy\by2\b2k\bk]\b] _\b\u00b6\n+ * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[p\bpy\by2\b2k\bk]\b] _\b\u00b6\n Fixed performance regression caused by the fix for _\b#_\b3_\b9_\b3_\b7 where cx_Oracle\n as of version 5.3 dropped the .UNICODE symbol from its namespace, which\n was interpreted as cx_Oracle\u2019s \u201cWITH_UNICODE\u201d mode being turned on\n unconditionally, which invokes functions on the SQLAlchemy side which\n convert all strings to unicode unconditionally and causing a performance\n impact. In fact, per cx_Oracle\u2019s author the \u201cWITH_UNICODE\u201d mode has been\n removed entirely as of 5.1, so the expensive unicode conversion functions\n"}]}, {"source1": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_11.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_11.html", "unified_diff": "@@ -875,15 +875,15 @@\n

\n
\n
\n

1.1.13\u00b6

\n Released: August 3, 2017
\n

oracle\u00b6

\n
    \n-
  • [oracle] [performance] [bug] [py2k] \u00b6

    Fixed performance regression caused by the fix for #3937 where\n+

  • [oracle] [bug] [performance] [py2k] \u00b6

    Fixed performance regression caused by the fix for #3937 where\n cx_Oracle as of version 5.3 dropped the .UNICODE symbol from its\n namespace, which was interpreted as cx_Oracle\u2019s \u201cWITH_UNICODE\u201d mode being\n turned on unconditionally, which invokes functions on the SQLAlchemy\n side which convert all strings to unicode unconditionally and causing\n a performance impact. In fact, per cx_Oracle\u2019s author the\n \u201cWITH_UNICODE\u201d mode has been removed entirely as of 5.1, so the expensive unicode\n conversion functions are no longer necessary and are disabled if\n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -496,15 +496,15 @@\n the same PRECEDING or FOLLOWING keywords in a range by allowing for the\n left side of the range to be positive and for the right to be negative,\n e.g. (1, 3) is \u201c1 FOLLOWING AND 3 FOLLOWING\u201d.\n References: _\b#_\b4_\b0_\b5_\b3\n *\b**\b**\b**\b**\b* 1\b1.\b.1\b1.\b.1\b13\b3_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n Released: August 3, 2017\n *\b**\b**\b**\b* o\bor\bra\bac\bcl\ble\be_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpy\by2\b2k\bk]\b] _\b\u00b6\n+ * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[p\bpy\by2\b2k\bk]\b] _\b\u00b6\n Fixed performance regression caused by the fix for _\b#_\b3_\b9_\b3_\b7 where cx_Oracle\n as of version 5.3 dropped the .UNICODE symbol from its namespace, which\n was interpreted as cx_Oracle\u2019s \u201cWITH_UNICODE\u201d mode being turned on\n unconditionally, which invokes functions on the SQLAlchemy side which\n convert all strings to unicode unconditionally and causing a performance\n impact. In fact, per cx_Oracle\u2019s author the \u201cWITH_UNICODE\u201d mode has been\n removed entirely as of 5.1, so the expensive unicode conversion functions\n"}]}, {"source1": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_12.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_12.html", "unified_diff": "@@ -2977,15 +2977,15 @@\n

    \n
  • \n
\n
\n
\n

oracle\u00b6

\n
    \n-
  • [oracle] [performance] [bug] [py2k] \u00b6

    Fixed performance regression caused by the fix for #3937 where\n+

  • [oracle] [bug] [performance] [py2k] \u00b6

    Fixed performance regression caused by the fix for #3937 where\n cx_Oracle as of version 5.3 dropped the .UNICODE symbol from its\n namespace, which was interpreted as cx_Oracle\u2019s \u201cWITH_UNICODE\u201d mode being\n turned on unconditionally, which invokes functions on the SQLAlchemy\n side which convert all strings to unicode unconditionally and causing\n a performance impact. In fact, per cx_Oracle\u2019s author the\n \u201cWITH_UNICODE\u201d mode has been removed entirely as of 5.1, so the expensive unicode\n conversion functions are no longer necessary and are disabled if\n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -1879,15 +1879,15 @@\n verify the number of rows affected on a target version.\n [\b[m\bms\bss\bsq\bql\bl]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Added a rule to SQL Server index reflection to ignore the so-called \u201cheap\u201d\n index that is implicitly present on a table that does not specify a clustered\n index.\n References: _\b#_\b4_\b0_\b5_\b9\n *\b**\b**\b**\b* o\bor\bra\bac\bcl\ble\be_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpy\by2\b2k\bk]\b] _\b\u00b6\n+ * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[p\bpy\by2\b2k\bk]\b] _\b\u00b6\n Fixed performance regression caused by the fix for _\b#_\b3_\b9_\b3_\b7 where cx_Oracle\n as of version 5.3 dropped the .UNICODE symbol from its namespace, which\n was interpreted as cx_Oracle\u2019s \u201cWITH_UNICODE\u201d mode being turned on\n unconditionally, which invokes functions on the SQLAlchemy side which\n convert all strings to unicode unconditionally and causing a performance\n impact. In fact, per cx_Oracle\u2019s author the \u201cWITH_UNICODE\u201d mode has been\n removed entirely as of 5.1, so the expensive unicode conversion functions\n"}]}, {"source1": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_13.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_13.html", "unified_diff": "@@ -1803,30 +1803,30 @@\n

    \n
  • \n
\n
\n
\n

oracle\u00b6

\n
    \n-
  • [oracle] [performance] [bug] \u00b6

    Changed the implementation of fetching CLOB and BLOB objects to use\n+

  • [oracle] [bug] \u00b6

    Some modifications to how the cx_oracle dialect sets up per-column\n+outputtype handlers for LOB and numeric datatypes to adjust for potential\n+changes coming in cx_Oracle 8.

    \n+

    References: #5246

    \n+

    \n+
  • \n+
  • [oracle] [bug] [performance] \u00b6

    Changed the implementation of fetching CLOB and BLOB objects to use\n cx_Oracle\u2019s native implementation which fetches CLOB/BLOB objects inline\n with other result columns, rather than performing a separate fetch. As\n always, this can be disabled by setting auto_convert_lobs to False.

    \n

    As part of this change, the behavior of a CLOB that was given a blank\n string on INSERT now returns None on SELECT, which is now consistent with\n that of VARCHAR on Oracle.

    \n

    References: #5314

    \n

    \n
  • \n-
  • [oracle] [bug] \u00b6

    Some modifications to how the cx_oracle dialect sets up per-column\n-outputtype handlers for LOB and numeric datatypes to adjust for potential\n-changes coming in cx_Oracle 8.

    \n-

    References: #5246

    \n-

    \n-
  • \n
\n
\n
\n

misc\u00b6

\n
    \n
  • [change] [firebird] \u00b6

    Adjusted dialect loading for firebird:// URIs so the external\n sqlalchemy-firebird dialect will be used if it has been installed,\n@@ -2204,15 +2204,15 @@\n

    misc\u00b6

    \n
      \n
    • [usecase] [ext] \u00b6

      Added keyword arguments to the MutableList.sort() function so that a\n key function as well as the \u201creverse\u201d keyword argument can be provided.

      \n

      References: #5114

      \n

      \n
    • \n-
    • [performance] [bug] \u00b6

      Revised an internal change to the test system added as a result of\n+

    • [bug] [performance] \u00b6

      Revised an internal change to the test system added as a result of\n #5085 where a testing-related module per dialect would be loaded\n unconditionally upon making use of that dialect, pulling in SQLAlchemy\u2019s\n testing framework as well as the ORM into the module import space. This\n would only impact initial startup time and memory to a modest extent,\n however it\u2019s best that these additional modules aren\u2019t reverse-dependent on\n straight Core usage.

      \n

      References: #5180

      \n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -1144,28 +1144,28 @@\n References: _\b#_\b5_\b2_\b5_\b5\n [\b[m\bms\bss\bsq\bql\bl]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\bef\bfl\ble\bec\bct\bti\bio\bon\bn]\b] _\b\u00b6\n Fix a regression introduced by the reflection of computed column in MSSQL when\n using SQL server versions before 2012, which does not support the concat\n function.\n References: _\b#_\b5_\b2_\b7_\b1\n *\b**\b**\b**\b* o\bor\bra\bac\bcl\ble\be_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n- Changed the implementation of fetching CLOB and BLOB objects to use\n- cx_Oracle\u2019s native implementation which fetches CLOB/BLOB objects inline\n- with other result columns, rather than performing a separate fetch. As\n- always, this can be disabled by setting auto_convert_lobs to False.\n- As part of this change, the behavior of a CLOB that was given a blank\n- string on INSERT now returns None on SELECT, which is now consistent with\n- that of VARCHAR on Oracle.\n- References: _\b#_\b5_\b3_\b1_\b4\n-[\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n-Some modifications to how the cx_oracle dialect sets up per-column outputtype\n-handlers for LOB and numeric datatypes to adjust for potential changes coming\n-in cx_Oracle 8.\n-References: _\b#_\b5_\b2_\b4_\b6\n+ * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n+ Some modifications to how the cx_oracle dialect sets up per-column\n+ outputtype handlers for LOB and numeric datatypes to adjust for potential\n+ changes coming in cx_Oracle 8.\n+ References: _\b#_\b5_\b2_\b4_\b6\n+[\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n+Changed the implementation of fetching CLOB and BLOB objects to use cx_Oracle\u2019s\n+native implementation which fetches CLOB/BLOB objects inline with other result\n+columns, rather than performing a separate fetch. As always, this can be\n+disabled by setting auto_convert_lobs to False.\n+As part of this change, the behavior of a CLOB that was given a blank string on\n+INSERT now returns None on SELECT, which is now consistent with that of VARCHAR\n+on Oracle.\n+References: _\b#_\b5_\b3_\b1_\b4\n *\b**\b**\b**\b* m\bmi\bis\bsc\bc_\b?\b\u00b6 *\b**\b**\b**\b*\n * [\b[c\bch\bha\ban\bng\bge\be]\b] [\b[f\bfi\bir\bre\beb\bbi\bir\brd\bd]\b] _\b\u00b6\n Adjusted dialect loading for firebird:// URIs so the external sqlalchemy-\n firebird dialect will be used if it has been installed, otherwise fall\n back to the (now deprecated) internal Firebird dialect.\n References: _\b#_\b5_\b2_\b7_\b8\n *\b**\b**\b**\b**\b* 1\b1.\b.3\b3.\b.1\b16\b6_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n@@ -1409,15 +1409,15 @@\n but owned by someone else. 
Pull request courtesy Dave Hirschfeld.\n References: _\b#_\b5_\b1_\b4_\b6\n *\b**\b**\b**\b* m\bmi\bis\bsc\bc_\b?\b\u00b6 *\b**\b**\b**\b*\n * [\b[u\bus\bse\bec\bca\bas\bse\be]\b] [\b[e\bex\bxt\bt]\b] _\b\u00b6\n Added keyword arguments to the _\bM_\bu_\bt_\ba_\bb_\bl_\be_\bL_\bi_\bs_\bt_\b._\bs_\bo_\br_\bt_\b(_\b) function so that a key\n function as well as the \u201creverse\u201d keyword argument can be provided.\n References: _\b#_\b5_\b1_\b1_\b4\n-[\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n+[\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n Revised an internal change to the test system added as a result of _\b#_\b5_\b0_\b8_\b5 where\n a testing-related module per dialect would be loaded unconditionally upon\n making use of that dialect, pulling in SQLAlchemy\u2019s testing framework as well\n as the ORM into the module import space. This would only impact initial startup\n time and memory to a modest extent, however it\u2019s best that these additional\n modules aren\u2019t reverse-dependent on straight Core usage.\n References: _\b#_\b5_\b1_\b8_\b0\n"}]}, {"source1": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_14.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_14.html", "unified_diff": "@@ -1226,28 +1226,28 @@\n

      \n
    • \n
    \n
\n
\n

sql\u00b6

\n \n
\n
\n

mypy\u00b6

\n
    \n
  • [mypy] [bug] \u00b6

    The deprecated mypy plugin is no longer fully functional with the latest\n series of mypy 1.11.0, as changes in the mypy interpreter are no longer\n@@ -3065,36 +3065,36 @@\n attributes and entities that are installed as part of an Insert,\n Update, or Delete construct. The\n Select.column_descriptions accessor is also now implemented for\n Core-only selectables.

    \n

    References: #7861

    \n

    \n
  • \n-
  • [orm] [performance] [bug] \u00b6

    Improvements in memory usage by the ORM, removing a significant set of\n-intermediary expression objects that are typically stored when a copy of an\n-expression object is created. These clones have been greatly reduced,\n-reducing the number of total expression objects stored in memory by\n-ORM mappings by about 30%.

    \n-

    References: #7823

    \n-

    \n-
  • \n-
  • [orm] [bug] [regression] \u00b6

    Fixed regression in \u201cdynamic\u201d loader strategy where the\n+

  • [orm] [bug] [regression] \u00b6

    Fixed regression in \u201cdynamic\u201d loader strategy where the\n Query.filter_by() method would not be given an appropriate\n entity to filter from, in the case where a \u201csecondary\u201d table were present\n in the relationship being queried and the mapping were against something\n complex such as a \u201cwith polymorphic\u201d.

    \n

    References: #7868

    \n

    \n
  • \n-
  • [orm] [bug] \u00b6

    Fixed bug where composite() attributes would not work in\n+

  • [orm] [bug] \u00b6

    Fixed bug where composite() attributes would not work in\n conjunction with the selectin_polymorphic() loader strategy for\n joined table inheritance.

    \n

    References: #7801

    \n

    \n
  • \n+
  • [orm] [bug] [performance] \u00b6

    Improvements in memory usage by the ORM, removing a significant set of\n+intermediary expression objects that are typically stored when a copy of an\n+expression object is created. These clones have been greatly reduced,\n+reducing the number of total expression objects stored in memory by\n+ORM mappings by about 30%.

    \n+

    References: #7823

    \n+

    \n+
  • \n
  • [orm] [bug] \u00b6

    Fixed issue where the selectin_polymorphic() loader option would\n not work with joined inheritance mappers that don\u2019t have a fixed\n \u201cpolymorphic_on\u201d column. Additionally added test support for a wider\n variety of usage patterns with this construct.

    \n

    References: #7799

    \n

    \n
  • \n@@ -4821,15 +4821,15 @@\n

    \n \n
\n
\n
\n

oracle\u00b6

\n
    \n-
  • [oracle] [performance] [bug] \u00b6

    Added a CAST(VARCHAR2(128)) to the \u201ctable name\u201d, \u201cowner\u201d, and other\n+

  • [oracle] [bug] [performance] \u00b6

    Added a CAST(VARCHAR2(128)) to the \u201ctable name\u201d, \u201cowner\u201d, and other\n DDL-name parameters as used in reflection queries against Oracle system\n views such as ALL_TABLES, ALL_TAB_CONSTRAINTS, etc to better enable\n indexing to take place against these columns, as they previously would be\n implicitly handled as NVARCHAR2 due to Python\u2019s use of Unicode for strings;\n these columns are documented in all Oracle versions as being VARCHAR2 with\n lengths varying from 30 to 128 characters depending on server version.\n Additionally, test support has been enabled for Unicode-named DDL\n@@ -5544,24 +5544,15 @@\n

\n
\n
\n

1.4.18\u00b6

\n Released: June 10, 2021
\n

orm\u00b6

\n
    \n-
  • [orm] [performance] [bug] [regression] \u00b6

    Fixed regression involving how the ORM would resolve a given mapped column\n-to a result row, where under cases such as joined eager loading, a slightly\n-more expensive \u201cfallback\u201d could take place to set up this resolution due to\n-some logic that was removed since 1.3. The issue could also cause\n-deprecation warnings involving column resolution to be emitted when using a\n-1.4 style query with joined eager loading.

    \n-

    References: #6596

    \n-

    \n-
  • \n-
  • [orm] [bug] \u00b6

    Clarified the current purpose of the\n+

  • [orm] [bug] \u00b6

    Clarified the current purpose of the\n relationship.bake_queries flag, which in 1.4 is to enable\n or disable \u201clambda caching\u201d of statements within the \u201clazyload\u201d and\n \u201cselectinload\u201d loader strategies; this is separate from the more\n foundational SQL query cache that is used for most statements.\n Additionally, the lazy loader no longer uses its own cache for many-to-one\n SQL queries, which was an implementation quirk that doesn\u2019t exist for any\n other loader scenario. Finally, the \u201clru cache\u201d warning that the lazyloader\n@@ -5571,29 +5562,38 @@\n setting bake_queries=False for such a relationship will remove this\n cache from being used, there\u2019s no particular performance gain in this case\n as using no caching vs. using a cache that needs to refresh often likely\n still wins out on the caching being used side.

    \n

    References: #6072, #6487

    \n

    \n
  • \n-
  • [orm] [bug] [regression] \u00b6

    Adjusted the means by which classes such as scoped_session\n+

  • [orm] [bug] [regression] \u00b6

    Adjusted the means by which classes such as scoped_session\n and AsyncSession are generated from the base\n Session class, such that custom Session\n subclasses such as that used by Flask-SQLAlchemy don\u2019t need to implement\n positional arguments when they call into the superclass method, and can\n continue using the same argument styles as in previous releases.

    \n

    References: #6285

    \n

    \n
  • \n-
  • [orm] [bug] [regression] \u00b6

    Fixed issue where query production for joinedload against a complex left\n+

  • [orm] [bug] [regression] \u00b6

    Fixed issue where query production for joinedload against a complex left\n hand side involving joined-table inheritance could fail to produce a\n correct query, due to a clause adaption issue.

    \n

    References: #6595

    \n

    \n
  • \n+
  • [orm] [bug] [performance] [regression] \u00b6

    Fixed regression involving how the ORM would resolve a given mapped column\n+to a result row, where under cases such as joined eager loading, a slightly\n+more expensive \u201cfallback\u201d could take place to set up this resolution due to\n+some logic that was removed since 1.3. The issue could also cause\n+deprecation warnings involving column resolution to be emitted when using a\n+1.4 style query with joined eager loading.

    \n+

    References: #6596

    \n+

    \n+
  • \n
  • [orm] [bug] \u00b6

    Fixed issue in experimental \u201cselect ORM objects from INSERT/UPDATE\u201d use\n case where an error was raised if the statement were against a\n single-table-inheritance subclass.

    \n

    References: #6591

    \n

    \n
  • \n
  • [orm] [bug] \u00b6

    The warning that\u2019s emitted for relationship() when multiple\n@@ -6437,15 +6437,15 @@\n synonyms can be established linking to these constructs which work\n fully. This is a behavior that was semi-explicitly disallowed previously,\n however since it did not fail in every scenario, explicit support\n for assoc proxy and hybrids has been added.

    \n

    References: #6267

    \n

    \n
  • \n-
  • [orm] [performance] [bug] [regression] [sql] \u00b6

    Fixed a critical performance issue where the traversal of a\n+

  • [orm] [bug] [performance] [regression] [sql] \u00b6

    Fixed a critical performance issue where the traversal of a\n select() construct would traverse a repetitive product of the\n represented FROM clauses as they were each referenced by columns in\n the columns clause; for a series of nested subqueries with lots of columns\n this could cause a large delay and significant memory growth. This\n traversal is used by a wide variety of SQL and ORM functions, including by\n the ORM Session when it\u2019s configured to have\n \u201ctable-per-bind\u201d, which while this is not a common use case, it seems to be\n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -808,24 +808,24 @@\n sqlalchemy.util.await_only() directly.\n [\b[e\ben\bng\bgi\bin\bne\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Adjustments to the C extensions, which are specific to the SQLAlchemy 1.x\n series, to work under Python 3.13. Pull request courtesy Ben Beasley.\n References: _\b#_\b1_\b1_\b4_\b9_\b9\n *\b**\b**\b**\b* s\bsq\bql\bl_\b?\b\u00b6 *\b**\b**\b**\b*\n * [\b[s\bsq\bql\bl]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n- Fixed caching issue where using the _\bT_\be_\bx_\bt_\bu_\ba_\bl_\bS_\be_\bl_\be_\bc_\bt_\b._\ba_\bd_\bd_\b__\bc_\bt_\be_\b(_\b) method of the\n- _\bT_\be_\bx_\bt_\bu_\ba_\bl_\bS_\be_\bl_\be_\bc_\bt construct would not set a correct cache key which\n- distinguished between different CTE expressions.\n- References: _\b#_\b1_\b1_\b4_\b7_\b1\n-[\b[s\bsq\bql\bl]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n-Fixed caching issue where the _\bS_\be_\bl_\be_\bc_\bt_\b._\bw_\bi_\bt_\bh_\b__\bf_\bo_\br_\b__\bu_\bp_\bd_\ba_\bt_\be_\b._\bk_\be_\by_\b__\bs_\bh_\ba_\br_\be element of\n-_\bS_\be_\bl_\be_\bc_\bt_\b._\bw_\bi_\bt_\bh_\b__\bf_\bo_\br_\b__\bu_\bp_\bd_\ba_\bt_\be_\b(_\b) was not considered as part of the cache key, leading\n-to incorrect caching if different variations of this parameter were used with\n-an otherwise identical statement.\n-References: _\b#_\b1_\b1_\b5_\b4_\b4\n+ Fixed caching issue where the _\bS_\be_\bl_\be_\bc_\bt_\b._\bw_\bi_\bt_\bh_\b__\bf_\bo_\br_\b__\bu_\bp_\bd_\ba_\bt_\be_\b._\bk_\be_\by_\b__\bs_\bh_\ba_\br_\be element of\n+ _\bS_\be_\bl_\be_\bc_\bt_\b._\bw_\bi_\bt_\bh_\b__\bf_\bo_\br_\b__\bu_\bp_\bd_\ba_\bt_\be_\b(_\b) was not considered as part of the cache key,\n+ leading to incorrect caching if different variations of this parameter\n+ were used with an otherwise identical statement.\n+ References: _\b#_\b1_\b1_\b5_\b4_\b4\n+[\b[s\bsq\bql\bl]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n+Fixed caching issue where using the _\bT_\be_\bx_\bt_\bu_\ba_\bl_\bS_\be_\bl_\be_\bc_\bt_\b._\ba_\bd_\bd_\b__\bc_\bt_\be_\b(_\b) method of the\n+_\bT_\be_\bx_\bt_\bu_\ba_\bl_\bS_\be_\bl_\be_\bc_\bt construct would not set a correct cache key which distinguished\n+between different CTE expressions.\n+References: _\b#_\b1_\b1_\b4_\b7_\b1\n *\b**\b**\b**\b* m\bmy\byp\bpy\by_\b?\b\u00b6 *\b**\b**\b**\b*\n * [\b[m\bmy\byp\bpy\by]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n The deprecated mypy plugin is no longer fully functional with the latest\n series of mypy 1.11.0, as changes in the mypy interpreter are no longer\n compatible with the approach used by the plugin. If code is dependent on\n the mypy plugin with sqlalchemy2-stubs, it\u2019s recommended to pin mypy to\n be below the 1.11.0 series. 
Seek upgrading to the 2.0 series of\n@@ -2032,31 +2032,31 @@\n [\b[o\bor\brm\bm]\b] [\b[u\bus\bse\bec\bca\bas\bse\be]\b] _\b\u00b6\n Added new attributes _\bU_\bp_\bd_\ba_\bt_\be_\bB_\ba_\bs_\be_\b._\br_\be_\bt_\bu_\br_\bn_\bi_\bn_\bg_\b__\bc_\bo_\bl_\bu_\bm_\bn_\b__\bd_\be_\bs_\bc_\br_\bi_\bp_\bt_\bi_\bo_\bn_\bs and\n _\bU_\bp_\bd_\ba_\bt_\be_\bB_\ba_\bs_\be_\b._\be_\bn_\bt_\bi_\bt_\by_\b__\bd_\be_\bs_\bc_\br_\bi_\bp_\bt_\bi_\bo_\bn to allow for inspection of ORM attributes and\n entities that are installed as part of an _\bI_\bn_\bs_\be_\br_\bt, _\bU_\bp_\bd_\ba_\bt_\be, or _\bD_\be_\bl_\be_\bt_\be construct.\n The _\bS_\be_\bl_\be_\bc_\bt_\b._\bc_\bo_\bl_\bu_\bm_\bn_\b__\bd_\be_\bs_\bc_\br_\bi_\bp_\bt_\bi_\bo_\bn_\bs accessor is also now implemented for Core-only\n selectables.\n References: _\b#_\b7_\b8_\b6_\b1\n-[\b[o\bor\brm\bm]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n-Improvements in memory usage by the ORM, removing a significant set of\n-intermediary expression objects that are typically stored when a copy of an\n-expression object is created. These clones have been greatly reduced, reducing\n-the number of total expression objects stored in memory by ORM mappings by\n-about 30%.\n-References: _\b#_\b7_\b8_\b2_\b3\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n Fixed regression in \u201cdynamic\u201d loader strategy where the _\bQ_\bu_\be_\br_\by_\b._\bf_\bi_\bl_\bt_\be_\br_\b__\bb_\by_\b(_\b)\n method would not be given an appropriate entity to filter from, in the case\n where a \u201csecondary\u201d table were present in the relationship being queried and\n the mapping were against something complex such as a \u201cwith polymorphic\u201d.\n References: _\b#_\b7_\b8_\b6_\b8\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Fixed bug where _\bc_\bo_\bm_\bp_\bo_\bs_\bi_\bt_\be_\b(_\b) attributes would not work in conjunction with the\n _\bs_\be_\bl_\be_\bc_\bt_\bi_\bn_\b__\bp_\bo_\bl_\by_\bm_\bo_\br_\bp_\bh_\bi_\bc_\b(_\b) loader strategy for joined table inheritance.\n References: _\b#_\b7_\b8_\b0_\b1\n+[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n+Improvements in memory usage by the ORM, removing a significant set of\n+intermediary expression objects that are typically stored when a copy of an\n+expression object is created. These clones have been greatly reduced, reducing\n+the number of total expression objects stored in memory by ORM mappings by\n+about 30%.\n+References: _\b#_\b7_\b8_\b2_\b3\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Fixed issue where the _\bs_\be_\bl_\be_\bc_\bt_\bi_\bn_\b__\bp_\bo_\bl_\by_\bm_\bo_\br_\bp_\bh_\bi_\bc_\b(_\b) loader option would not work with\n joined inheritance mappers that don\u2019t have a fixed \u201cpolymorphic_on\u201d column.\n Additionally added test support for a wider variety of usage patterns with this\n construct.\n References: _\b#_\b7_\b7_\b9_\b9\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n@@ -3255,15 +3255,15 @@\n * [\b[m\bms\bss\bsq\bql\bl]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\bef\bfl\ble\bec\bct\bti\bio\bon\bn]\b] _\b\u00b6\n Fixed an issue where sqlalchemy.engine.reflection.has_table() returned\n True for local temporary tables that actually belonged to a different SQL\n Server session (connection). 
An extra check is now performed to ensure\n that the temp table detected is in fact owned by the current session.\n References: _\b#_\b6_\b9_\b1_\b0\n *\b**\b**\b**\b* o\bor\bra\bac\bcl\ble\be_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n+ * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n Added a CAST(VARCHAR2(128)) to the \u201ctable name\u201d, \u201cowner\u201d, and other DDL-\n name parameters as used in reflection queries against Oracle system views\n such as ALL_TABLES, ALL_TAB_CONSTRAINTS, etc to better enable indexing to\n take place against these columns, as they previously would be implicitly\n handled as NVARCHAR2 due to Python\u2019s use of Unicode for strings; these\n columns are documented in all Oracle versions as being VARCHAR2 with\n lengths varying from 30 to 128 characters depending on server version.\n@@ -3763,50 +3763,51 @@\n the INSERT thus triggering SQLAlchemy\u2019s feature of setting IDENTITY INSERT to\n \u201con\u201d; it\u2019s in this directive where the schema translate map would fail to be\n honored.\n References: _\b#_\b6_\b6_\b5_\b8\n *\b**\b**\b**\b**\b* 1\b1.\b.4\b4.\b.1\b18\b8_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n Released: June 10, 2021\n *\b**\b**\b**\b* o\bor\brm\bm_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\brm\bm]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n- Fixed regression involving how the ORM would resolve a given mapped\n- column to a result row, where under cases such as joined eager loading, a\n- slightly more expensive \u201cfallback\u201d could take place to set up this\n- resolution due to some logic that was removed since 1.3. The issue could\n- also cause deprecation warnings involving column resolution to be emitted\n- when using a 1.4 style query with joined eager loading.\n- References: _\b#_\b6_\b5_\b9_\b6\n-[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n-Clarified the current purpose of the _\br_\be_\bl_\ba_\bt_\bi_\bo_\bn_\bs_\bh_\bi_\bp_\b._\bb_\ba_\bk_\be_\b__\bq_\bu_\be_\br_\bi_\be_\bs flag, which in\n-1.4 is to enable or disable \u201clambda caching\u201d of statements within the\n-\u201clazyload\u201d and \u201cselectinload\u201d loader strategies; this is separate from the more\n-foundational SQL query cache that is used for most statements. Additionally,\n-the lazy loader no longer uses its own cache for many-to-one SQL queries, which\n-was an implementation quirk that doesn\u2019t exist for any other loader scenario.\n-Finally, the \u201clru cache\u201d warning that the lazyloader and selectinloader\n-strategies could emit when handling a wide array of class/relationship\n-combinations has been removed; based on analysis of some end-user cases, this\n-warning doesn\u2019t suggest any significant issue. While setting bake_queries=False\n-for such a relationship will remove this cache from being used, there\u2019s no\n-particular performance gain in this case as using no caching vs. 
using a cache\n-that needs to refresh often likely still wins out on the caching being used\n-side.\n-References: _\b#_\b6_\b0_\b7_\b2, _\b#_\b6_\b4_\b8_\b7\n+ * [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n+ Clarified the current purpose of the _\br_\be_\bl_\ba_\bt_\bi_\bo_\bn_\bs_\bh_\bi_\bp_\b._\bb_\ba_\bk_\be_\b__\bq_\bu_\be_\br_\bi_\be_\bs flag,\n+ which in 1.4 is to enable or disable \u201clambda caching\u201d of statements\n+ within the \u201clazyload\u201d and \u201cselectinload\u201d loader strategies; this is\n+ separate from the more foundational SQL query cache that is used for most\n+ statements. Additionally, the lazy loader no longer uses its own cache\n+ for many-to-one SQL queries, which was an implementation quirk that\n+ doesn\u2019t exist for any other loader scenario. Finally, the \u201clru cache\u201d\n+ warning that the lazyloader and selectinloader strategies could emit when\n+ handling a wide array of class/relationship combinations has been\n+ removed; based on analysis of some end-user cases, this warning doesn\u2019t\n+ suggest any significant issue. While setting bake_queries=False for such\n+ a relationship will remove this cache from being used, there\u2019s no\n+ particular performance gain in this case as using no caching vs. using a\n+ cache that needs to refresh often likely still wins out on the caching\n+ being used side.\n+ References: _\b#_\b6_\b0_\b7_\b2, _\b#_\b6_\b4_\b8_\b7\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n Adjusted the means by which classes such as _\bs_\bc_\bo_\bp_\be_\bd_\b__\bs_\be_\bs_\bs_\bi_\bo_\bn and _\bA_\bs_\by_\bn_\bc_\bS_\be_\bs_\bs_\bi_\bo_\bn are\n generated from the base _\bS_\be_\bs_\bs_\bi_\bo_\bn class, such that custom _\bS_\be_\bs_\bs_\bi_\bo_\bn subclasses such\n as that used by Flask-SQLAlchemy don\u2019t need to implement positional arguments\n when they call into the superclass method, and can continue using the same\n argument styles as in previous releases.\n References: _\b#_\b6_\b2_\b8_\b5\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n Fixed issue where query production for joinedload against a complex left hand\n side involving joined-table inheritance could fail to produce a correct query,\n due to a clause adaption issue.\n References: _\b#_\b6_\b5_\b9_\b5\n+[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n+Fixed regression involving how the ORM would resolve a given mapped column to a\n+result row, where under cases such as joined eager loading, a slightly more\n+expensive \u201cfallback\u201d could take place to set up this resolution due to some\n+logic that was removed since 1.3. 
The issue could also cause deprecation\n+warnings involving column resolution to be emitted when using a 1.4 style query\n+with joined eager loading.\n+References: _\b#_\b6_\b5_\b9_\b6\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Fixed issue in experimental \u201cselect ORM objects from INSERT/UPDATE\u201d use case\n where an error was raised if the statement were against a single-table-\n inheritance subclass.\n References: _\b#_\b6_\b5_\b9_\b1\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n The warning that\u2019s emitted for _\br_\be_\bl_\ba_\bt_\bi_\bo_\bn_\bs_\bh_\bi_\bp_\b(_\b) when multiple relationships would\n@@ -4376,15 +4377,15 @@\n Established support for synoynm() in conjunction with hybrid property,\n assocaitionproxy is set up completely, including that synonyms can be\n established linking to these constructs which work fully. This is a\n behavior that was semi-explicitly disallowed previously, however since it\n did not fail in every scenario, explicit support for assoc proxy and\n hybrids has been added.\n References: _\b#_\b6_\b2_\b6_\b7\n-[\b[o\bor\brm\bm]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] [\b[s\bsq\bql\bl]\b] _\b\u00b6\n+[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] [\b[s\bsq\bql\bl]\b] _\b\u00b6\n Fixed a critical performance issue where the traversal of a _\bs_\be_\bl_\be_\bc_\bt_\b(_\b) construct\n would traverse a repetitive product of the represented FROM clauses as they\n were each referenced by columns in the columns clause; for a series of nested\n subqueries with lots of columns this could cause a large delay and significant\n memory growth. This traversal is used by a wide variety of SQL and ORM\n functions, including by the ORM _\bS_\be_\bs_\bs_\bi_\bo_\bn when it\u2019s configured to have \u201ctable-\n per-bind\u201d, which while this is not a common use case, it seems to be what\n"}]}, {"source1": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_20.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_20.html", "unified_diff": "@@ -1573,32 +1573,32 @@\n

\n
\n
\n

2.0.28\u00b6

\n Released: March 4, 2024
\n

orm\u00b6

\n
    \n-
  • [orm] [performance] [bug] [regression] \u00b6

    Adjusted the fix made in #10570, released in 2.0.23, where new\n+

  • [orm] [bug] [regression] \u00b6

    Fixed regression caused by #9779 where using the \u201csecondary\u201d table\n+in a relationship and_() expression would fail to be aliased to match\n+how the \u201csecondary\u201d table normally renders within a\n+Select.join() expression, leading to an invalid query.

    \n+

    References: #11010

    \n+

    \n+
  • \n+
  • [orm] [bug] [performance] [regression] \u00b6

    Adjusted the fix made in #10570, released in 2.0.23, where new\n logic was added to reconcile possibly changing bound parameter values\n across cache key generations used within the with_expression()\n construct. The new logic changes the approach by which the new bound\n parameter values are associated with the statement, avoiding the need to\n deep-copy the statement which can result in a significant performance\n penalty for very deep / complex SQL constructs. The new approach no longer\n requires this deep-copy step.

    \n

    References: #11085

    \n

    \n
  • \n-
  • [orm] [bug] [regression] \u00b6

    Fixed regression caused by #9779 where using the \u201csecondary\u201d table\n-in a relationship and_() expression would fail to be aliased to match\n-how the \u201csecondary\u201d table normally renders within a\n-Select.join() expression, leading to an invalid query.

    \n-

    References: #11010

    \n-

    \n-
  • \n
\n
\n
\n

engine\u00b6

\n \n
\n
\n

oracle\u00b6

\n
    \n-
  • [oracle] [performance] [bug] \u00b6

    Changed the default arraysize of the Oracle dialects so that the value set\n+

  • [oracle] [bug] [performance] \u00b6

    Changed the default arraysize of the Oracle dialects so that the value set\n by the driver is used, that is 100 at the time of writing for both\n cx_oracle and oracledb. Previously the value was set to 50 by default. The\n setting of 50 could cause significant performance regressions compared to\n when using cx_oracle/oracledb alone to fetch many hundreds of rows over\n slower networks.

    \n

    References: #10877

    \n

    \n@@ -6073,39 +6073,39 @@\n relationship() etc. to provide for the Python dataclasses\n compare parameter on field(), when using the\n Declarative Dataclass Mapping feature. Pull request courtesy\n Simon Schiele.

    \n

    References: #8905

    \n

    \n
  • \n-
  • [orm] [performance] [bug] \u00b6

    Additional performance enhancements within ORM-enabled SQL statements,\n-specifically targeting callcounts within the construction of ORM\n-statements, using combinations of aliased() with\n-union() and similar \u201ccompound\u201d constructs, in addition to direct\n-performance improvements to the corresponding_column() internal method\n-that is used heavily by the ORM by constructs like aliased() and\n-similar.

    \n-

    References: #8796

    \n-

    \n-
  • \n-
  • [orm] [bug] \u00b6

    Fixed issue where use of an unknown datatype within a Mapped\n+

  • [orm] [bug] \u00b6

    Fixed issue where use of an unknown datatype within a Mapped\n annotation for a column-based attribute would silently fail to map the\n attribute, rather than reporting an exception; an informative exception\n message is now raised.

    \n

    References: #8888

    \n

    \n
  • \n-
  • [orm] [bug] \u00b6

    Fixed a suite of issues involving Mapped use with dictionary\n+

  • [orm] [bug] \u00b6

    Fixed a suite of issues involving Mapped use with dictionary\n types, such as Mapped[Dict[str, str] | None], would not be correctly\n interpreted in Declarative ORM mappings. Support to correctly\n \u201cde-optionalize\u201d this type including for lookup in type_annotation_map\n has been fixed.

    \n

    References: #8777

    \n

    \n
  • \n+
  • [orm] [bug] [performance] \u00b6

    Additional performance enhancements within ORM-enabled SQL statements,\n+specifically targeting callcounts within the construction of ORM\n+statements, using combinations of aliased() with\n+union() and similar \u201ccompound\u201d constructs, in addition to direct\n+performance improvements to the corresponding_column() internal method\n+that is used heavily by the ORM by constructs like aliased() and\n+similar.

    \n+

    References: #8796

    \n+

    \n+
  • \n
  • [orm] [bug] \u00b6

    Fixed bug in Declarative Dataclass Mapping feature where using\n plain dataclass fields with the __allow_unmapped__ directive in a\n mapping would not create a dataclass with the correct class-level state for\n those fields, copying the raw Field object to the class inappropriately\n after dataclasses itself had replaced the Field object with the\n class-level default value.

    \n

    References: #8880

    \n@@ -7952,29 +7952,15 @@\n that may refer to additional tables within the WHERE criteria of the\n statement without the need to use subqueries. This syntax is invoked\n automatically when using the Update construct when more than\n one table or other entity or selectable is used.

    \n

    References: #7185

    \n

    \n
  • \n-
  • [sqlite] [performance] [bug] \u00b6

    The SQLite dialect now defaults to QueuePool when a file\n-based database is used. This is set along with setting the\n-check_same_thread parameter to False. It has been observed that the\n-previous approach of defaulting to NullPool, which does not\n-hold onto database connections after they are released, did in fact have a\n-measurable negative performance impact. As always, the pool class is\n-customizable via the create_engine.poolclass parameter.

    \n-\n-

    References: #7490

    \n-

    \n-
  • \n-
  • [sqlite] [bug] \u00b6

    Removed the warning that emits from the Numeric type about\n+

  • [sqlite] [bug] \u00b6

    Removed the warning that emits from the Numeric type about\n DBAPIs not supporting Decimal values natively. This warning was oriented\n towards SQLite, which does not have any real way without additional\n extensions or workarounds of handling precision numeric values more than 15\n significant digits as it only uses floating point math to represent\n numbers. As this is a known and documented limitation in SQLite itself, and\n not a quirk of the pysqlite driver, there\u2019s no need for SQLAlchemy to warn\n for this. The change does not otherwise modify how precision numerics are\n@@ -7982,14 +7968,28 @@\n as configured with the Numeric, Float , and\n related datatypes, just without the ability to maintain precision beyond 15\n significant digits when using SQLite, unless alternate representations such\n as strings are used.

    \n

    References: #7299

    \n

    \n
  • \n+
  • [sqlite] [bug] [performance] \u00b6

    The SQLite dialect now defaults to QueuePool when a file\n+based database is used. This is set along with setting the\n+check_same_thread parameter to False. It has been observed that the\n+previous approach of defaulting to NullPool, which does not\n+hold onto database connections after they are released, did in fact have a\n+measurable negative performance impact. As always, the pool class is\n+customizable via the create_engine.poolclass parameter.

    \n+\n+

    References: #7490

    \n+

    \n+
  • \n
\n
\n
\n

mssql\u00b6

\n
    \n
  • [mssql] [usecase] \u00b6

    Implemented reflection of the \u201cclustered index\u201d flag mssql_clustered\n for the SQL Server dialect. Pull request courtesy John Lennox.

    \n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -1021,30 +1021,29 @@\n should hopefully prevent issues with large suite runs on CPU loaded\n hardware where the event loop seems to become corrupted, leading to\n cascading failures.\n References: _\b#_\b1_\b1_\b1_\b8_\b7\n *\b**\b**\b**\b**\b* 2\b2.\b.0\b0.\b.2\b28\b8_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n Released: March 4, 2024\n *\b**\b**\b**\b* o\bor\brm\bm_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\brm\bm]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n- Adjusted the fix made in _\b#_\b1_\b0_\b5_\b7_\b0, released in 2.0.23, where new logic was\n- added to reconcile possibly changing bound parameter values across cache\n- key generations used within the _\bw_\bi_\bt_\bh_\b__\be_\bx_\bp_\br_\be_\bs_\bs_\bi_\bo_\bn_\b(_\b) construct. The new\n- logic changes the approach by which the new bound parameter values are\n- associated with the statement, avoiding the need to deep-copy the\n- statement which can result in a significant performance penalty for very\n- deep / complex SQL constructs. The new approach no longer requires this\n- deep-copy step.\n- References: _\b#_\b1_\b1_\b0_\b8_\b5\n-[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n-Fixed regression caused by _\b#_\b9_\b7_\b7_\b9 where using the \u201csecondary\u201d table in a\n-relationship and_() expression would fail to be aliased to match how the\n-\u201csecondary\u201d table normally renders within a _\bS_\be_\bl_\be_\bc_\bt_\b._\bj_\bo_\bi_\bn_\b(_\b) expression, leading\n-to an invalid query.\n-References: _\b#_\b1_\b1_\b0_\b1_\b0\n+ * [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n+ Fixed regression caused by _\b#_\b9_\b7_\b7_\b9 where using the \u201csecondary\u201d table in a\n+ relationship and_() expression would fail to be aliased to match how the\n+ \u201csecondary\u201d table normally renders within a _\bS_\be_\bl_\be_\bc_\bt_\b._\bj_\bo_\bi_\bn_\b(_\b) expression,\n+ leading to an invalid query.\n+ References: _\b#_\b1_\b1_\b0_\b1_\b0\n+[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[r\bre\beg\bgr\bre\bes\bss\bsi\bio\bon\bn]\b] _\b\u00b6\n+Adjusted the fix made in _\b#_\b1_\b0_\b5_\b7_\b0, released in 2.0.23, where new logic was added\n+to reconcile possibly changing bound parameter values across cache key\n+generations used within the _\bw_\bi_\bt_\bh_\b__\be_\bx_\bp_\br_\be_\bs_\bs_\bi_\bo_\bn_\b(_\b) construct. The new logic changes\n+the approach by which the new bound parameter values are associated with the\n+statement, avoiding the need to deep-copy the statement which can result in a\n+significant performance penalty for very deep / complex SQL constructs. The new\n+approach no longer requires this deep-copy step.\n+References: _\b#_\b1_\b1_\b0_\b8_\b5\n *\b**\b**\b**\b* e\ben\bng\bgi\bin\bne\be_\b?\b\u00b6 *\b**\b**\b**\b*\n * [\b[e\ben\bng\bgi\bin\bne\be]\b] [\b[u\bus\bse\bec\bca\bas\bse\be]\b] _\b\u00b6\n Added new core execution option\n _\bC_\bo_\bn_\bn_\be_\bc_\bt_\bi_\bo_\bn_\b._\be_\bx_\be_\bc_\bu_\bt_\bi_\bo_\bn_\b__\bo_\bp_\bt_\bi_\bo_\bn_\bs_\b._\bp_\br_\be_\bs_\be_\br_\bv_\be_\b__\br_\bo_\bw_\bc_\bo_\bu_\bn_\bt. 
When set, the\n cursor.rowcount attribute from the DBAPI cursor will be unconditionally\n memoized at statement execution time, so that whatever value the DBAPI\n offers for any kind of statement will be available using the\n@@ -1191,15 +1190,15 @@\n Fixed an issue regarding the use of the _\bU_\bu_\bi_\bd datatype with the\n _\bU_\bu_\bi_\bd_\b._\ba_\bs_\b__\bu_\bu_\bi_\bd parameter set to False, when using the pymssql dialect. ORM-\n optimized INSERT statements (e.g. the \u201cinsertmanyvalues\u201d feature) would\n not correctly align primary key UUID values for bulk INSERT statements,\n resulting in errors. Similar issues were fixed for the PostgreSQL drivers\n as well.\n *\b**\b**\b**\b* o\bor\bra\bac\bcl\ble\be_\b?\b\u00b6 *\b**\b**\b**\b*\n- * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n+ * [\b[o\bor\bra\bac\bcl\ble\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n Changed the default arraysize of the Oracle dialects so that the value\n set by the driver is used, that is 100 at the time of writing for both\n cx_oracle and oracledb. Previously the value was set to 50 by default.\n The setting of 50 could cause significant performance regressions\n compared to when using cx_oracle/oracledb alone to fetch many hundreds of\n rows over slower networks.\n References: _\b#_\b1_\b0_\b8_\b7_\b7\n@@ -4086,33 +4085,33 @@\n References: _\b#_\b8_\b8_\b5_\b9\n [\b[o\bor\brm\bm]\b] [\b[u\bus\bse\bec\bca\bas\bse\be]\b] _\b\u00b6\n Added _\bm_\ba_\bp_\bp_\be_\bd_\b__\bc_\bo_\bl_\bu_\bm_\bn_\b._\bc_\bo_\bm_\bp_\ba_\br_\be parameter to relevant ORM attribute constructs\n including _\bm_\ba_\bp_\bp_\be_\bd_\b__\bc_\bo_\bl_\bu_\bm_\bn_\b(_\b), _\br_\be_\bl_\ba_\bt_\bi_\bo_\bn_\bs_\bh_\bi_\bp_\b(_\b) etc. to provide for the Python\n dataclasses compare parameter on field(), when using the _\bD_\be_\bc_\bl_\ba_\br_\ba_\bt_\bi_\bv_\be_\b _\bD_\ba_\bt_\ba_\bc_\bl_\ba_\bs_\bs\n _\bM_\ba_\bp_\bp_\bi_\bn_\bg feature. Pull request courtesy Simon Schiele.\n References: _\b#_\b8_\b9_\b0_\b5\n-[\b[o\bor\brm\bm]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n-Additional performance enhancements within ORM-enabled SQL statements,\n-specifically targeting callcounts within the construction of ORM statements,\n-using combinations of _\ba_\bl_\bi_\ba_\bs_\be_\bd_\b(_\b) with _\bu_\bn_\bi_\bo_\bn_\b(_\b) and similar \u201ccompound\u201d constructs,\n-in addition to direct performance improvements to the corresponding_column()\n-internal method that is used heavily by the ORM by constructs like _\ba_\bl_\bi_\ba_\bs_\be_\bd_\b(_\b)\n-and similar.\n-References: _\b#_\b8_\b7_\b9_\b6\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Fixed issue where use of an unknown datatype within a _\bM_\ba_\bp_\bp_\be_\bd annotation for a\n column-based attribute would silently fail to map the attribute, rather than\n reporting an exception; an informative exception message is now raised.\n References: _\b#_\b8_\b8_\b8_\b8\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Fixed a suite of issues involving _\bM_\ba_\bp_\bp_\be_\bd use with dictionary types, such as\n Mapped[Dict[str, str] | None], would not be correctly interpreted in\n Declarative ORM mappings. 
Support to correctly \u201cde-optionalize\u201d this type\n including for lookup in type_annotation_map has been fixed.\n References: _\b#_\b8_\b7_\b7_\b7\n+[\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n+Additional performance enhancements within ORM-enabled SQL statements,\n+specifically targeting callcounts within the construction of ORM statements,\n+using combinations of _\ba_\bl_\bi_\ba_\bs_\be_\bd_\b(_\b) with _\bu_\bn_\bi_\bo_\bn_\b(_\b) and similar \u201ccompound\u201d constructs,\n+in addition to direct performance improvements to the corresponding_column()\n+internal method that is used heavily by the ORM by constructs like _\ba_\bl_\bi_\ba_\bs_\be_\bd_\b(_\b)\n+and similar.\n+References: _\b#_\b8_\b7_\b9_\b6\n [\b[o\bor\brm\bm]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Fixed bug in _\bD_\be_\bc_\bl_\ba_\br_\ba_\bt_\bi_\bv_\be_\b _\bD_\ba_\bt_\ba_\bc_\bl_\ba_\bs_\bs_\b _\bM_\ba_\bp_\bp_\bi_\bn_\bg feature where using plain dataclass\n fields with the __allow_unmapped__ directive in a mapping would not create a\n dataclass with the correct class-level state for those fields, copying the raw\n Field object to the class inappropriately after dataclasses itself had replaced\n the Field object with the class-level default value.\n References: _\b#_\b8_\b8_\b8_\b0\n@@ -5487,38 +5486,38 @@\n [\b[s\bsq\bql\bli\bit\bte\be]\b] [\b[u\bus\bse\bec\bca\bas\bse\be]\b] _\b\u00b6\n The SQLite dialect now supports UPDATE..FROM syntax, for UPDATE statements that\n may refer to additional tables within the WHERE criteria of the statement\n without the need to use subqueries. This syntax is invoked automatically when\n using the _\bU_\bp_\bd_\ba_\bt_\be construct when more than one table or other entity or\n selectable is used.\n References: _\b#_\b7_\b1_\b8_\b5\n-[\b[s\bsq\bql\bli\bit\bte\be]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n-The SQLite dialect now defaults to _\bQ_\bu_\be_\bu_\be_\bP_\bo_\bo_\bl when a file based database is\n-used. This is set along with setting the check_same_thread parameter to False.\n-It has been observed that the previous approach of defaulting to _\bN_\bu_\bl_\bl_\bP_\bo_\bo_\bl,\n-which does not hold onto database connections after they are released, did in\n-fact have a measurable negative performance impact. As always, the pool class\n-is customizable via the _\bc_\br_\be_\ba_\bt_\be_\b__\be_\bn_\bg_\bi_\bn_\be_\b._\bp_\bo_\bo_\bl_\bc_\bl_\ba_\bs_\bs parameter.\n-See also\n-_\bT_\bh_\be_\b _\bS_\bQ_\bL_\bi_\bt_\be_\b _\bd_\bi_\ba_\bl_\be_\bc_\bt_\b _\bu_\bs_\be_\bs_\b _\bQ_\bu_\be_\bu_\be_\bP_\bo_\bo_\bl_\b _\bf_\bo_\br_\b _\bf_\bi_\bl_\be_\b-_\bb_\ba_\bs_\be_\bd_\b _\bd_\ba_\bt_\ba_\bb_\ba_\bs_\be_\bs\n-References: _\b#_\b7_\b4_\b9_\b0\n [\b[s\bsq\bql\bli\bit\bte\be]\b] [\b[b\bbu\bug\bg]\b] _\b\u00b6\n Removed the warning that emits from the _\bN_\bu_\bm_\be_\br_\bi_\bc type about DBAPIs not\n supporting Decimal values natively. This warning was oriented towards SQLite,\n which does not have any real way without additional extensions or workarounds\n of handling precision numeric values more than 15 significant digits as it only\n uses floating point math to represent numbers. As this is a known and\n documented limitation in SQLite itself, and not a quirk of the pysqlite driver,\n there\u2019s no need for SQLAlchemy to warn for this. The change does not otherwise\n modify how precision numerics are handled. 
Values can continue to be handled as\n Decimal() or float() as configured with the _\bN_\bu_\bm_\be_\br_\bi_\bc, _\bF_\bl_\bo_\ba_\bt , and related\n datatypes, just without the ability to maintain precision beyond 15 significant\n digits when using SQLite, unless alternate representations such as strings are\n used.\n References: _\b#_\b7_\b2_\b9_\b9\n+[\b[s\bsq\bql\bli\bit\bte\be]\b] [\b[b\bbu\bug\bg]\b] [\b[p\bpe\ber\brf\bfo\bor\brm\bma\ban\bnc\bce\be]\b] _\b\u00b6\n+The SQLite dialect now defaults to _\bQ_\bu_\be_\bu_\be_\bP_\bo_\bo_\bl when a file based database is\n+used. This is set along with setting the check_same_thread parameter to False.\n+It has been observed that the previous approach of defaulting to _\bN_\bu_\bl_\bl_\bP_\bo_\bo_\bl,\n+which does not hold onto database connections after they are released, did in\n+fact have a measurable negative performance impact. As always, the pool class\n+is customizable via the _\bc_\br_\be_\ba_\bt_\be_\b__\be_\bn_\bg_\bi_\bn_\be_\b._\bp_\bo_\bo_\bl_\bc_\bl_\ba_\bs_\bs parameter.\n+See also\n+_\bT_\bh_\be_\b _\bS_\bQ_\bL_\bi_\bt_\be_\b _\bd_\bi_\ba_\bl_\be_\bc_\bt_\b _\bu_\bs_\be_\bs_\b _\bQ_\bu_\be_\bu_\be_\bP_\bo_\bo_\bl_\b _\bf_\bo_\br_\b _\bf_\bi_\bl_\be_\b-_\bb_\ba_\bs_\be_\bd_\b _\bd_\ba_\bt_\ba_\bb_\ba_\bs_\be_\bs\n+References: _\b#_\b7_\b4_\b9_\b0\n *\b**\b**\b**\b* m\bms\bss\bsq\bql\bl_\b?\b\u00b6 *\b**\b**\b**\b*\n * [\b[m\bms\bss\bsq\bql\bl]\b] [\b[u\bus\bse\bec\bca\bas\bse\be]\b] _\b\u00b6\n Implemented reflection of the \u201cclustered index\u201d flag mssql_clustered for\n the SQL Server dialect. Pull request courtesy John Lennox.\n References: _\b#_\b8_\b2_\b8_\b8\n [\b[m\bms\bss\bsq\bql\bl]\b] [\b[u\bus\bse\bec\bca\bas\bse\be]\b] _\b\u00b6\n Added support table and column comments on MSSQL when creating a table. Added\n"}]}, {"source1": "./usr/share/doc/python-sqlalchemy-doc/html/orm/examples.html", "source2": "./usr/share/doc/python-sqlalchemy-doc/html/orm/examples.html", "comments": ["Ordering differences only"], "unified_diff": "@@ -319,28 +319,28 @@\n
orm/examples.html: the HTML-level diff shows ordering differences only in the rendered example listings, as follows.
Asyncio Integration: the first build lists basic.py, greenlet_orm.py, async_orm.py, gather_orm_statements.py, async_orm_writeonly.py; the second lists async_orm_writeonly.py, async_orm.py, gather_orm_statements.py, greenlet_orm.py, basic.py.
Generic associations: table_per_related.py moves from before table_per_association.py to after it; generic_fk.py and discriminator_on_association.py keep their positions.
examples/performance File Listing: bulk_updates.py, bulk_inserts.py, single_inserts.py, short_selects.py, __main__.py, large_resultsets.py becomes large_resultsets.py, __main__.py, bulk_inserts.py, bulk_updates.py, short_selects.py, single_inserts.py.
Versioned-rows recipes: versioned_map.py moves from first position to after versioned_rows.py; versioned_rows_w_versionid.py and versioned_update_old_row.py keep their positions.
Basic Inheritance Mappings: joined.py, single.py, concrete.py becomes concrete.py, single.py, joined.py.
Horizontal Sharding: asyncio.py, separate_tables.py, separate_schema_translates.py, separate_databases.py becomes separate_tables.py, separate_databases.py, asyncio.py, separate_schema_translates.py.
A sketch of the asyncio interface these examples share appears below; the html2text rendering of the same listing differences follows it.
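Those Asyncio Integration example files are built on create_async_engine() and AsyncSession from sqlalchemy.ext.asyncio. A minimal sketch of that interface, assuming the aiosqlite driver is installed; the in-memory URL is purely illustrative:

import asyncio

from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine


async def main() -> None:
    # Async engine / connection interface, as basic.py demonstrates.
    engine = create_async_engine("sqlite+aiosqlite:///:memory:")
    async with engine.connect() as conn:
        result = await conn.execute(text("select 1"))
        print(result.scalar_one())

    # AsyncSession wraps the ORM Session for awaitable use, as async_orm.py demonstrates.
    async with AsyncSession(engine) as session:
        await session.execute(text("select 1"))

    await engine.dispose()


asyncio.run(main())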
      \n", "details": [{"source1": "html2text {}", "source2": "html2text {}", "unified_diff": "@@ -109,24 +109,24 @@\n values, which conceal the underlying mapped classes.\n _\bb_\ba_\bs_\bi_\bc_\b__\ba_\bs_\bs_\bo_\bc_\bi_\ba_\bt_\bi_\bo_\bn_\b._\bp_\by - Illustrate a many-to-many relationship between an\n \u201cOrder\u201d and a collection of \u201cItem\u201d objects, associating a purchase price with\n each via an association object called \u201cOrderItem\u201d\n *\b**\b**\b**\b* A\bAs\bsy\byn\bnc\bci\bio\bo I\bIn\bnt\bte\beg\bgr\bra\bat\bti\bio\bon\bn_\b?\b\u00b6 *\b**\b**\b**\b*\n Examples illustrating the asyncio engine feature of SQLAlchemy.\n Listing of files:\n- * _\bb_\ba_\bs_\bi_\bc_\b._\bp_\by - Illustrates the asyncio engine / connection interface.\n-_\bg_\br_\be_\be_\bn_\bl_\be_\bt_\b__\bo_\br_\bm_\b._\bp_\by - Illustrates use of the sqlalchemy.ext.asyncio.AsyncSession\n-object for asynchronous ORM use, including the optional run_sync() method.\n+ * _\ba_\bs_\by_\bn_\bc_\b__\bo_\br_\bm_\b__\bw_\br_\bi_\bt_\be_\bo_\bn_\bl_\by_\b._\bp_\by - Illustrates using w\bwr\bri\bit\bte\be o\bon\bnl\bly\by r\bre\bel\bla\bat\bti\bio\bon\bns\bsh\bhi\bip\bps\bs for\n+ simpler handling of ORM collections under asyncio.\n _\ba_\bs_\by_\bn_\bc_\b__\bo_\br_\bm_\b._\bp_\by - Illustrates use of the sqlalchemy.ext.asyncio.AsyncSession\n object for asynchronous ORM use.\n _\bg_\ba_\bt_\bh_\be_\br_\b__\bo_\br_\bm_\b__\bs_\bt_\ba_\bt_\be_\bm_\be_\bn_\bt_\bs_\b._\bp_\by - Illustrates how to run many statements concurrently\n using asyncio.gather() along many asyncio database connections, merging ORM\n results into a single AsyncSession.\n-_\ba_\bs_\by_\bn_\bc_\b__\bo_\br_\bm_\b__\bw_\br_\bi_\bt_\be_\bo_\bn_\bl_\by_\b._\bp_\by - Illustrates using w\bwr\bri\bit\bte\be o\bon\bnl\bly\by r\bre\bel\bla\bat\bti\bio\bon\bns\bsh\bhi\bip\bps\bs for simpler\n-handling of ORM collections under asyncio.\n+_\bg_\br_\be_\be_\bn_\bl_\be_\bt_\b__\bo_\br_\bm_\b._\bp_\by - Illustrates use of the sqlalchemy.ext.asyncio.AsyncSession\n+object for asynchronous ORM use, including the optional run_sync() method.\n+_\bb_\ba_\bs_\bi_\bc_\b._\bp_\by - Illustrates the asyncio engine / connection interface.\n *\b**\b**\b**\b* D\bDi\bir\bre\bec\bct\bte\bed\bd G\bGr\bra\bap\bph\bhs\bs_\b?\b\u00b6 *\b**\b**\b**\b*\n An example of persistence for a directed graph structure. The graph is stored\n as a collection of edges, each referencing both a \u201clower\u201d and an \u201cupper\u201d node\n in a table of nodes. 
Basic persistence and querying for lower- and upper-\n neighbors are illustrated:\n n2 = Node(2)\n n5 = Node(5)\n@@ -154,21 +154,21 @@\n Listing of files:\n * _\bg_\be_\bn_\be_\br_\bi_\bc_\b__\bf_\bk_\b._\bp_\by - Illustrates a so-called \u201cgeneric foreign key\u201d, in a\n similar fashion to that of popular frameworks such as Django, ROR, etc.\n This approach bypasses standard referential integrity practices, in that\n the \u201cforeign key\u201d column is not actually constrained to refer to any\n particular table; instead, in-application logic is used to determine\n which table is referenced.\n-_\bt_\ba_\bb_\bl_\be_\b__\bp_\be_\br_\b__\br_\be_\bl_\ba_\bt_\be_\bd_\b._\bp_\by - Illustrates a generic association which persists\n-association objects within individual tables, each one generated to persist\n-those objects on behalf of a particular parent class.\n _\bt_\ba_\bb_\bl_\be_\b__\bp_\be_\br_\b__\ba_\bs_\bs_\bo_\bc_\bi_\ba_\bt_\bi_\bo_\bn_\b._\bp_\by - Illustrates a mixin which provides a generic\n association via a individually generated association tables for each parent\n class. The associated objects themselves are persisted in a single table shared\n among all parents.\n+_\bt_\ba_\bb_\bl_\be_\b__\bp_\be_\br_\b__\br_\be_\bl_\ba_\bt_\be_\bd_\b._\bp_\by - Illustrates a generic association which persists\n+association objects within individual tables, each one generated to persist\n+those objects on behalf of a particular parent class.\n _\bd_\bi_\bs_\bc_\br_\bi_\bm_\bi_\bn_\ba_\bt_\bo_\br_\b__\bo_\bn_\b__\ba_\bs_\bs_\bo_\bc_\bi_\ba_\bt_\bi_\bo_\bn_\b._\bp_\by - Illustrates a mixin which provides a generic\n association using a single target table and a single association table,\n referred to by all parent tables. The association table contains a\n \u201cdiscriminator\u201d column which determines what type of parent object associates\n to each particular row in the association table.\n *\b**\b**\b**\b* M\bMa\bat\bte\ber\bri\bia\bal\bli\biz\bze\bed\bd P\bPa\bat\bth\bhs\bs_\b?\b\u00b6 *\b**\b**\b**\b*\n Illustrates the \u201cmaterialized paths\u201d pattern for hierarchical data using the\n@@ -221,28 +221,28 @@\n $ python -m examples.performance bulk_inserts \\\n --dburl mysql+mysqldb://scott:tiger@localhost/test \\\n --profile --num 1000\n See also\n _\bH_\bo_\bw_\b _\bc_\ba_\bn_\b _\bI_\b _\bp_\br_\bo_\bf_\bi_\bl_\be_\b _\ba_\b _\bS_\bQ_\bL_\bA_\bl_\bc_\bh_\be_\bm_\by_\b _\bp_\bo_\bw_\be_\br_\be_\bd_\b _\ba_\bp_\bp_\bl_\bi_\bc_\ba_\bt_\bi_\bo_\bn_\b?\n *\b**\b**\b* F\bFi\bil\ble\be L\bLi\bis\bst\bti\bin\bng\bg_\b?\b\u00b6 *\b**\b**\b*\n Listing of files:\n- * _\bb_\bu_\bl_\bk_\b__\bu_\bp_\bd_\ba_\bt_\be_\bs_\b._\bp_\by - This series of tests will illustrate different ways to\n- UPDATE a large number of rows in bulk (under construction! there\u2019s just\n- one test at the moment)\n+ * _\bl_\ba_\br_\bg_\be_\b__\br_\be_\bs_\bu_\bl_\bt_\bs_\be_\bt_\bs_\b._\bp_\by - In this series of tests, we are looking at time to\n+ load a large number of very small and simple rows.\n+_\b__\b__\bm_\ba_\bi_\bn_\b__\b__\b._\bp_\by - Allows the examples/performance package to be run as a script.\n _\bb_\bu_\bl_\bk_\b__\bi_\bn_\bs_\be_\br_\bt_\bs_\b._\bp_\by - This series of tests illustrates different ways to INSERT a\n large number of rows in bulk.\n+_\bb_\bu_\bl_\bk_\b__\bu_\bp_\bd_\ba_\bt_\be_\bs_\b._\bp_\by - This series of tests will illustrate different ways to UPDATE\n+a large number of rows in bulk (under construction! 
there\u2019s just one test at\n+the moment)\n+_\bs_\bh_\bo_\br_\bt_\b__\bs_\be_\bl_\be_\bc_\bt_\bs_\b._\bp_\by - This series of tests illustrates different ways to SELECT a\n+single record by primary key\n _\bs_\bi_\bn_\bg_\bl_\be_\b__\bi_\bn_\bs_\be_\br_\bt_\bs_\b._\bp_\by - In this series of tests, we\u2019re looking at a method that\n inserts a row within a distinct transaction, and afterwards returns to\n essentially a \u201cclosed\u201d state. This would be analogous to an API call that\n starts up a database connection, inserts the row, commits and closes.\n-_\bs_\bh_\bo_\br_\bt_\b__\bs_\be_\bl_\be_\bc_\bt_\bs_\b._\bp_\by - This series of tests illustrates different ways to SELECT a\n-single record by primary key\n-_\b__\b__\bm_\ba_\bi_\bn_\b__\b__\b._\bp_\by - Allows the examples/performance package to be run as a script.\n-_\bl_\ba_\br_\bg_\be_\b__\br_\be_\bs_\bu_\bl_\bt_\bs_\be_\bt_\bs_\b._\bp_\by - In this series of tests, we are looking at time to load a\n-large number of very small and simple rows.\n *\b**\b**\b* R\bRu\bun\bnn\bni\bin\bng\bg a\bal\bll\bl t\bte\bes\bst\bts\bs w\bwi\bit\bth\bh t\bti\bim\bme\be_\b?\b\u00b6 *\b**\b**\b*\n This is the default form of run:\n $ python -m examples.performance single_inserts\n Tests to run: test_orm_commit, test_bulk_save,\n test_bulk_insert_dictionaries, test_core,\n test_core_query_caching, test_dbapi_raw_w_connect,\n test_dbapi_raw_w_pool\n@@ -468,20 +468,20 @@\n Several examples that illustrate the technique of intercepting changes that\n would be first interpreted as an UPDATE on a row, and instead turning it into\n an INSERT of a new row, leaving the previous row intact as a historical\n version.\n Compare to the _\bV_\be_\br_\bs_\bi_\bo_\bn_\bi_\bn_\bg_\b _\bw_\bi_\bt_\bh_\b _\ba_\b _\bH_\bi_\bs_\bt_\bo_\br_\by_\b _\bT_\ba_\bb_\bl_\be example which writes a history\n row to a separate history table.\n Listing of files:\n- * _\bv_\be_\br_\bs_\bi_\bo_\bn_\be_\bd_\b__\bm_\ba_\bp_\b._\bp_\by - A variant of the versioned_rows example built around\n- the concept of a \u201cvertical table\u201d structure, like those illustrated in\n- _\bV_\be_\br_\bt_\bi_\bc_\ba_\bl_\b _\bA_\bt_\bt_\br_\bi_\bb_\bu_\bt_\be_\b _\bM_\ba_\bp_\bp_\bi_\bn_\bg examples.\n-_\bv_\be_\br_\bs_\bi_\bo_\bn_\be_\bd_\b__\br_\bo_\bw_\bs_\b._\bp_\by - Illustrates a method to intercept changes on objects,\n-turning an UPDATE statement on a single row into an INSERT statement, so that a\n-new row is inserted with the new data, keeping the old row intact.\n+ * _\bv_\be_\br_\bs_\bi_\bo_\bn_\be_\bd_\b__\br_\bo_\bw_\bs_\b._\bp_\by - Illustrates a method to intercept changes on objects,\n+ turning an UPDATE statement on a single row into an INSERT statement, so\n+ that a new row is inserted with the new data, keeping the old row intact.\n+_\bv_\be_\br_\bs_\bi_\bo_\bn_\be_\bd_\b__\bm_\ba_\bp_\b._\bp_\by - A variant of the versioned_rows example built around the\n+concept of a \u201cvertical table\u201d structure, like those illustrated in _\bV_\be_\br_\bt_\bi_\bc_\ba_\bl\n+_\bA_\bt_\bt_\br_\bi_\bb_\bu_\bt_\be_\b _\bM_\ba_\bp_\bp_\bi_\bn_\bg examples.\n _\bv_\be_\br_\bs_\bi_\bo_\bn_\be_\bd_\b__\br_\bo_\bw_\bs_\b__\bw_\b__\bv_\be_\br_\bs_\bi_\bo_\bn_\bi_\bd_\b._\bp_\by - Illustrates a method to intercept changes on\n objects, turning an UPDATE statement on a single row into an INSERT statement,\n so that a new row is inserted with the new data, keeping the old row intact.\n 
_\bv_\be_\br_\bs_\bi_\bo_\bn_\be_\bd_\b__\bu_\bp_\bd_\ba_\bt_\be_\b__\bo_\bl_\bd_\b__\br_\bo_\bw_\b._\bp_\by - Illustrates the same UPDATE into INSERT technique\n of versioned_rows.py, but also emits an UPDATE on the o\bol\bld\bd row to affect a\n change in timestamp. Also includes a _\bS_\be_\bs_\bs_\bi_\bo_\bn_\bE_\bv_\be_\bn_\bt_\bs_\b._\bd_\bo_\b__\bo_\br_\bm_\b__\be_\bx_\be_\bc_\bu_\bt_\be_\b(_\b) hook to\n limit queries to only the most recent version.\n@@ -515,30 +515,29 @@\n _\bd_\bi_\bc_\bt_\bl_\bi_\bk_\be_\b-_\bp_\bo_\bl_\by_\bm_\bo_\br_\bp_\bh_\bi_\bc_\b._\bp_\by - Mapping a polymorphic-valued vertical table as a\n dictionary.\n *\b**\b**\b**\b**\b* I\bIn\bnh\bhe\ber\bri\bit\bta\ban\bnc\bce\be M\bMa\bap\bpp\bpi\bin\bng\bg R\bRe\bec\bci\bip\bpe\bes\bs_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n *\b**\b**\b**\b* B\bBa\bas\bsi\bic\bc I\bIn\bnh\bhe\ber\bri\bit\bta\ban\bnc\bce\be M\bMa\bap\bpp\bpi\bin\bng\bgs\bs_\b?\b\u00b6 *\b**\b**\b**\b*\n Working examples of single-table, joined-table, and concrete-table inheritance\n as described in _\bM_\ba_\bp_\bp_\bi_\bn_\bg_\b _\bC_\bl_\ba_\bs_\bs_\b _\bI_\bn_\bh_\be_\br_\bi_\bt_\ba_\bn_\bc_\be_\b _\bH_\bi_\be_\br_\ba_\br_\bc_\bh_\bi_\be_\bs.\n Listing of files:\n- * _\bj_\bo_\bi_\bn_\be_\bd_\b._\bp_\by - Joined-table (table-per-subclass) inheritance example.\n+ * _\bc_\bo_\bn_\bc_\br_\be_\bt_\be_\b._\bp_\by - Concrete-table (table-per-class) inheritance example.\n _\bs_\bi_\bn_\bg_\bl_\be_\b._\bp_\by - Single-table (table-per-hierarchy) inheritance example.\n-_\bc_\bo_\bn_\bc_\br_\be_\bt_\be_\b._\bp_\by - Concrete-table (table-per-class) inheritance example.\n+_\bj_\bo_\bi_\bn_\be_\bd_\b._\bp_\by - Joined-table (table-per-subclass) inheritance example.\n *\b**\b**\b**\b**\b* S\bSp\bpe\bec\bci\bia\bal\bl A\bAP\bPI\bIs\bs_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n *\b**\b**\b**\b* A\bAt\btt\btr\bri\bib\bbu\but\bte\be I\bIn\bns\bst\btr\bru\bum\bme\ben\bnt\bta\bat\bti\bio\bon\bn_\b?\b\u00b6 *\b**\b**\b**\b*\n Examples illustrating modifications to SQLAlchemy\u2019s attribute management\n system.\n Listing of files:\n- * _\ba_\bc_\bt_\bi_\bv_\be_\b__\bc_\bo_\bl_\bu_\bm_\bn_\b__\bd_\be_\bf_\ba_\bu_\bl_\bt_\bs_\b._\bp_\by - Illustrates use of the\n- _\bA_\bt_\bt_\br_\bi_\bb_\bu_\bt_\be_\bE_\bv_\be_\bn_\bt_\bs_\b._\bi_\bn_\bi_\bt_\b__\bs_\bc_\ba_\bl_\ba_\br_\b(_\b) event, in conjunction with Core column\n- defaults to provide ORM objects that automatically produce the default\n- value when an un-set attribute is accessed.\n+ * _\bl_\bi_\bs_\bt_\be_\bn_\b__\bf_\bo_\br_\b__\be_\bv_\be_\bn_\bt_\bs_\b._\bp_\by - Illustrates how to attach events to all\n+ instrumented attributes and listen for change events.\n _\bc_\bu_\bs_\bt_\bo_\bm_\b__\bm_\ba_\bn_\ba_\bg_\be_\bm_\be_\bn_\bt_\b._\bp_\by - Illustrates customized class instrumentation, using the\n _\bs_\bq_\bl_\ba_\bl_\bc_\bh_\be_\bm_\by_\b._\be_\bx_\bt_\b._\bi_\bn_\bs_\bt_\br_\bu_\bm_\be_\bn_\bt_\ba_\bt_\bi_\bo_\bn extension package.\n-_\bl_\bi_\bs_\bt_\be_\bn_\b__\bf_\bo_\br_\b__\be_\bv_\be_\bn_\bt_\bs_\b._\bp_\by - Illustrates how to attach events to all instrumented\n-attributes and listen for change events.\n+_\ba_\bc_\bt_\bi_\bv_\be_\b__\bc_\bo_\bl_\bu_\bm_\bn_\b__\bd_\be_\bf_\ba_\bu_\bl_\bt_\bs_\b._\bp_\by - Illustrates use of the _\bA_\bt_\bt_\br_\bi_\bb_\bu_\bt_\be_\bE_\bv_\be_\bn_\bt_\bs_\b._\bi_\bn_\bi_\bt_\b__\bs_\bc_\ba_\bl_\ba_\br\n+_\b(_\b) event, in conjunction with Core column defaults to provide ORM objects that\n+automatically produce the default value when an un-set attribute is accessed.\n 
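listen_for_events.py, referenced just above, attaches listeners to instrumented attributes; the underlying mechanism is the AttributeEvents "set" event. A small sketch under assumed names (the User class and its name column are invented here for illustration):

from sqlalchemy import event
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user_account"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]


@event.listens_for(User.name, "set")
def receive_set(target, value, oldvalue, initiator):
    # Fires whenever User.name is assigned on an instance.
    print(f"name changed: {oldvalue!r} -> {value!r}")


u = User(id=1)
u.name = "spongebob"  # triggers the listener above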
*\b**\b**\b**\b* H\bHo\bor\bri\biz\bzo\bon\bnt\bta\bal\bl S\bSh\bha\bar\brd\bdi\bin\bng\bg_\b?\b\u00b6 *\b**\b**\b**\b*\n A basic example of using the SQLAlchemy Sharding API. Sharding refers to\n horizontally scaling data across multiple databases.\n The basic components of a \u201csharded\u201d mapping are:\n * multiple _\bE_\bn_\bg_\bi_\bn_\be instances, each assigned a \u201cshard id\u201d. These _\bE_\bn_\bg_\bi_\bn_\be\n instances may refer to different databases, or different schemas /\n accounts within the same database, or they can even be differentiated\n@@ -559,21 +558,21 @@\n attempt to determine a single shard being requested.\n The construction of generic sharding routines is an ambitious approach to the\n issue of organizing instances among multiple databases. For a more plain-spoken\n alternative, the \u201cdistinct entity\u201d approach is a simple method of assigning\n objects to different tables (and potentially database nodes) in an explicit way\n - described on the wiki at _\bE_\bn_\bt_\bi_\bt_\by_\bN_\ba_\bm_\be.\n Listing of files:\n- * _\ba_\bs_\by_\bn_\bc_\bi_\bo_\b._\bp_\by - Illustrates sharding API used with asyncio.\n-_\bs_\be_\bp_\ba_\br_\ba_\bt_\be_\b__\bt_\ba_\bb_\bl_\be_\bs_\b._\bp_\by - Illustrates sharding using a single SQLite database, that\n-will however have multiple tables using a naming convention.\n+ * _\bs_\be_\bp_\ba_\br_\ba_\bt_\be_\b__\bt_\ba_\bb_\bl_\be_\bs_\b._\bp_\by - Illustrates sharding using a single SQLite database,\n+ that will however have multiple tables using a naming convention.\n+_\bs_\be_\bp_\ba_\br_\ba_\bt_\be_\b__\bd_\ba_\bt_\ba_\bb_\ba_\bs_\be_\bs_\b._\bp_\by - Illustrates sharding using distinct SQLite databases.\n+_\ba_\bs_\by_\bn_\bc_\bi_\bo_\b._\bp_\by - Illustrates sharding API used with asyncio.\n _\bs_\be_\bp_\ba_\br_\ba_\bt_\be_\b__\bs_\bc_\bh_\be_\bm_\ba_\b__\bt_\br_\ba_\bn_\bs_\bl_\ba_\bt_\be_\bs_\b._\bp_\by - Illustrates sharding using a single database\n with multiple schemas, where a different \u201cschema_translates_map\u201d can be used\n for each shard.\n-_\bs_\be_\bp_\ba_\br_\ba_\bt_\be_\b__\bd_\ba_\bt_\ba_\bb_\ba_\bs_\be_\bs_\b._\bp_\by - Illustrates sharding using distinct SQLite databases.\n *\b**\b**\b**\b**\b* E\bEx\bxt\bte\ben\bnd\bdi\bin\bng\bg t\bth\bhe\be O\bOR\bRM\bM_\b?\b\u00b6 *\b**\b**\b**\b**\b*\n *\b**\b**\b**\b* O\bOR\bRM\bM Q\bQu\bue\ber\bry\by E\bEv\bve\ben\bnt\bts\bs_\b?\b\u00b6 *\b**\b**\b**\b*\n Recipes which illustrate augmentation of ORM SELECT behavior as used by\n _\bS_\be_\bs_\bs_\bi_\bo_\bn_\b._\be_\bx_\be_\bc_\bu_\bt_\be_\b(_\b) with _\b2_\b._\b0_\b _\bs_\bt_\by_\bl_\be use of _\bs_\be_\bl_\be_\bc_\bt_\b(_\b), as well as the _\b1_\b._\bx_\b _\bs_\bt_\by_\bl_\be\n _\bQ_\bu_\be_\br_\by object.\n Examples include demonstrations of the _\bw_\bi_\bt_\bh_\b__\bl_\bo_\ba_\bd_\be_\br_\b__\bc_\br_\bi_\bt_\be_\br_\bi_\ba_\b(_\b) option as well as\n the _\bS_\be_\bs_\bs_\bi_\bo_\bn_\bE_\bv_\be_\bn_\bt_\bs_\b._\bd_\bo_\b__\bo_\br_\bm_\b__\be_\bx_\be_\bc_\bu_\bt_\be_\b(_\b) hook.\n"}]}]}]}]}]}
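The ORM Query Events recipes closing this listing combine the SessionEvents.do_orm_execute() hook with the with_loader_criteria() option. A hedged sketch of that combination, using an invented Account class with a deleted flag (the class, the flag, and the filtering rule are assumptions for illustration, not part of the shipped examples):

from sqlalchemy import event
from sqlalchemy.orm import (
    DeclarativeBase,
    Mapped,
    Session,
    mapped_column,
    with_loader_criteria,
)


class Base(DeclarativeBase):
    pass


class Account(Base):
    __tablename__ = "account"

    id: Mapped[int] = mapped_column(primary_key=True)
    deleted: Mapped[bool] = mapped_column(default=False)


@event.listens_for(Session, "do_orm_execute")
def _filter_deleted(execute_state):
    # Add Account.deleted == False to every top-level ORM SELECT, skipping
    # column and relationship lazy loads so the criteria is not applied twice.
    if (
        execute_state.is_select
        and not execute_state.is_column_load
        and not execute_state.is_relationship_load
    ):
        execute_state.statement = execute_state.statement.options(
            with_loader_criteria(Account, Account.deleted.is_(False), include_aliases=True)
        )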