Message ID | 87wn629ggg.fsf@meer.lwn.net
---|---
State | New |
Headers |
From: Jonathan Corbet <corbet@lwn.net>
To: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Martin Liška <mliska@suse.cz>, Mauro Carvalho Chehab <mchehab@kernel.org>
Subject: [PATCH] docs: Fix the docs build with Sphinx 6.0
Date: Wed, 04 Jan 2023 13:45:35 -0700
Message-ID: <87wn629ggg.fsf@meer.lwn.net>
Series | docs: Fix the docs build with Sphinx 6.0
Commit Message
Jonathan Corbet
Jan. 4, 2023, 8:45 p.m. UTC
Sphinx 6.0 removed the execfile_() function, which we use as part of the
configuration process. They *did* warn us... Just open-code the
functionality as is done in Sphinx itself.
Tested (using SPHINX_CONF, since this code is only executed with an
alternative config file) on various Sphinx versions from 2.5 through 6.0.
Reported-by: Martin Liška <mliska@suse.cz>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
---
Documentation/sphinx/load_config.py | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
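For reference, the open-coded replacement for the removed `execfile_()` helper boils down to "read the file as bytes, `compile()` it, `exec()` it into a namespace dict". Here is a minimal standalone sketch of that pattern; note it is an illustration, not the patch itself — it uses a throwaway temp file in place of a real conf.py, and passes the real path as the `compile()` filename (the kernel patch passes `fs_encoding` there, mirroring older Sphinx internals; the filename only affects traceback display).

```python
import tempfile

def load_config_file(config_file, namespace):
    # Open-code the old sphinx.util.pycompat.execfile_() helper:
    # execute the config file in a copy of the namespace, then fold
    # the resulting names back into the caller's namespace.
    config = namespace.copy()
    config['__file__'] = config_file
    with open(config_file, 'rb') as f:
        code = compile(f.read(), config_file, 'exec')
        exec(code, config)
    del config['__file__']
    namespace.update(config)
    return namespace

# A throwaway file stands in for an alternative conf.py here.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as tmp:
    tmp.write("project = 'demo'\nrelease = version + '.1'\n")
    path = tmp.name

ns = load_config_file(path, {'version': '6.0'})
print(ns['project'], ns['release'])   # -> demo 6.0.1
```

The executed file can read names already present in the namespace (here `version`), which is exactly why the kernel's `loadConfig()` seeds the dict with a copy of the main conf.py namespace before `exec()`ing the override file.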
Comments
On Wed, 04 Jan 2023 13:45:35 -0700, Jonathan Corbet wrote:
> Sphinx 6.0 removed the execfile_() function, which we use as part of the
> configuration process. They *did* warn us... Just open-code the
> functionality as is done in Sphinx itself.
>
> Tested (using SPHINX_CONF, since this code is only executed with an
> alternative config file) on various Sphinx versions from 2.5 through 6.0.
>
> Reported-by: Martin Liška <mliska@suse.cz>
> Signed-off-by: Jonathan Corbet <corbet@lwn.net>

I have tested full builds of documentation with this change with Sphinx
versions 1.7.9, 2.4.5, 3.4.3, 4.5.0, 5.3.0, and 6.0.0.

Tested-by: Akira Yokosawa <akiyks@gmail.com>

That said, Sphinx 6.0.0 needs much more time and memory than earlier
versions.

FYI, I needed to limit parallel slots to 2 (make -j2) on a 16GB machine.
If you are lucky, -j3 and -j4 might succeed. -j5 or more ended up in
OOM situations for me.

Comparison of elapsed time and maxresident with -j2:

============== ============ ===========
Sphinx version elapsed time maxresident
============== ============ ===========
5.3.0          10:16.81     937660
6.0.0          17:29.07     5292392
============== ============ ===========

Thanks, Akira
On Sat, 7 Jan 2023 14:17:24 +0900, Akira Yokosawa <akiyks@gmail.com> wrote:
> That said, Sphinx 6.0.0 needs much more time and memory than earlier
> versions.
>
> FYI, I needed to limit parallel slots to 2 (make -j2) on a 16GB machine.
> If you are lucky, -j3 and -j4 might succeed. -j5 or more ended up in
> OOM situations for me.

From the changelogs:
https://www.sphinx-doc.org/en/master/changes.html

It seems that 6.1 came with some performance optimizations, in particular:

  Cache doctrees in the build environment during the writing phase.

  Make all writing phase tasks support parallel execution.

  Cache doctrees between the reading and writing phases.

It would be nice if you could also test and check elapsed time there too,
as I suspect that 6.0 will have a very short usage, as 6.1 was released
just a few days after it.

Regards, Mauro.
On 1/8/23 15:01, Mauro Carvalho Chehab wrote:
> On Sat, 7 Jan 2023 14:17:24 +0900, Akira Yokosawa <akiyks@gmail.com> wrote:
>> That said, Sphinx 6.0.0 needs much more time and memory than earlier
>> versions.

Hi.

I can confirm the regression. I bisected the Sphinx revision that caused it
and filed an upstream issue:
https://github.com/sphinx-doc/sphinx/issues/11116

Cheers, Martin
On Mon, 9 Jan 2023 15:14:46 +0100, Martin Liška wrote:
> I can confirm the regression. I bisected the Sphinx revision that caused it
> and filed an upstream issue:
> https://github.com/sphinx-doc/sphinx/issues/11116

Thank you Martin for looking into this!

>> It would be nice if you could also test and check elapsed time
>> there too, as I suspect that 6.0 will have a very short usage, as
>> 6.1 was released just a few days after it.

Here is a table comparing 5.3.0, 6.0.1, and 6.1.2, taken on the same machine
(with 16GiB mem + 2GiB swap):

====== ===================================
                 elapsed time
       -----------------------------------
Sphinx -j1      -j2      -j4      -j6
====== ======== ======== ======== ========
6.1.2  15:11.74 18:06.89 16:39.93 OOM
6.0.1  15:28.19 17:22.15 16:31.30 OOM
5.3.0  14:13.04 10:16.81  8:22.37  8:09.74
====== ======== ======== ======== ========

Note:
- The -j1 run needs an explicit option given to sphinx-build:
      make SPHINXOPTS="-q -j1" htmldocs
- Once an OOM happens, -j4 runs are likely to end up in OOM as well.

Looks like the non-parallel run is the cheapest option for mitigating
the regression.

Thanks, Akira
On Tue, 10 Jan 2023 00:17:11 +0900, Akira Yokosawa wrote:
> On Mon, 9 Jan 2023 15:14:46 +0100, Martin Liška wrote:
>> I can confirm the regression. I bisected the Sphinx revision that caused it
>> and filed an upstream issue:
>> https://github.com/sphinx-doc/sphinx/issues/11116
>
> Thank you Martin for looking into this!

Thanks to Martin's input on the github issue, Sphinx 6.1.3 has been released
and the issue is resolved for parallel builds.

However, for non-parallel builds, the memory hog still remains.
Again, this is a table comparing 5.3.0, 6.1.2, and 6.1.3:

====== =================================== ===============================
                 elapsed time                        maxresident
       ----------------------------------- -------------------------------
Sphinx -j1      -j2      -j4      -j6      -j1     -j2     -j4     -j6
====== ======== ======== ======== ======== ======= ======= ======= =======
6.1.3  15:03.83 11:31.99  9:35.15  8:49.01 2949056 1059516  978232  967400
6.1.2  15:11.74 18:06.89 16:39.93 OOM      2961524 5548344 5255372      --
5.3.0  14:13.04 10:16.81  8:22.37  8:09.74  711532  937660  846016  800340
====== ======== ======== ======== ======== ======= ======= ======= =======

Note:
- The -j1 run needs an explicit option given to sphinx-build:
      make SPHINXOPTS="-q -j1" htmldocs

I naively assumed that the memory hog would be resolved altogether,
but that's not the case.

Martin, could you report the remaining issue to upstream Sphinx?

Thanks, Akira
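(Aside: the elapsed/maxresident figures in these tables read like GNU time(1) output, where maxresident is the peak resident set size in kilobytes. If you want to collect the same two numbers without /usr/bin/time, a small Unix-only Python wrapper works; this is a sketch, and the `make ... htmldocs` invocation in the comment is illustrative, not executed here.)

```python
import resource
import subprocess
import sys
import time

def measure(cmd):
    # Wall-clock time via a monotonic clock; peak RSS of all waited-for
    # child processes via getrusage().  On Linux ru_maxrss is reported
    # in kilobytes, the same unit as the tables above (macOS uses bytes).
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    maxrss = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return elapsed, maxrss

# Any command works; a kernel docs build would be something like
#   measure(['make', 'SPHINXOPTS=-q -j1', 'htmldocs'])
elapsed, maxrss = measure([sys.executable, '-c', 'x = list(range(10**6))'])
print('elapsed %.2fs, maxresident %dk' % (elapsed, maxrss))
```

Note that RUSAGE_CHILDREN aggregates every child the script has waited for, so measure one build per process for clean numbers.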
On 1/12/23 00:13, Akira Yokosawa wrote:
> Thanks to Martin's input on the github issue, Sphinx 6.1.3 has been
> released and the issue is resolved for parallel builds.

You're welcome.

> However, for non-parallel builds, the memory hog still remains.

Thank you for the nice numbers you provided.

> I naively assumed that the memory hog would be resolved altogether,
> but that's not the case.

Yep, I would have expected the same.

> Martin, could you report the remaining issue to upstream Sphinx?

Sure: https://github.com/sphinx-doc/sphinx/issues/11124

Btw. do you have a GitHub account I can CC?

Cheers, Martin
On Thu, 12 Jan 2023 17:22:42 +0100, Martin Liška wrote:
> Thank you for the nice numbers you provided.

You are welcome.

> Sure: https://github.com/sphinx-doc/sphinx/issues/11124

Thanks!

> Btw. do you have a GitHub account I can CC?

I have reacted with an emoji and subscribed to the issue.

Thanks, Akira
diff --git a/Documentation/sphinx/load_config.py b/Documentation/sphinx/load_config.py
index eeb394b39e2c..8b416bfd75ac 100644
--- a/Documentation/sphinx/load_config.py
+++ b/Documentation/sphinx/load_config.py
@@ -3,7 +3,7 @@

 import os
 import sys
-from sphinx.util.pycompat import execfile_
+from sphinx.util.osutil import fs_encoding

 # ------------------------------------------------------------------------------
 def loadConfig(namespace):
@@ -48,7 +48,9 @@ def loadConfig(namespace):
             sys.stdout.write("load additional sphinx-config: %s\n" % config_file)
             config = namespace.copy()
             config['__file__'] = config_file
-            execfile_(config_file, config)
+            with open(config_file, 'rb') as f:
+                code = compile(f.read(), fs_encoding, 'exec')
+                exec(code, config)
             del config['__file__']
             namespace.update(config)
         else: