Thu Aug 26 12:29:20 UTC 2021 I: starting to build skorch/bullseye/armhf on jenkins on '2021-08-26 12:28'
Thu Aug 26 12:29:20 UTC 2021 I: The jenkins build log is/was available at https://jenkins.debian.net/userContent/reproducible/debian/build_service/armhf_8/11566/console.log
Thu Aug 26 12:29:20 UTC 2021 I: Downloading source for bullseye/skorch=0.9.0-3
--2021-08-26 12:29:20--  http://cdn-fastly.deb.debian.org/debian/pool/main/s/skorch/skorch_0.9.0-3.dsc
Connecting to 78.137.99.97:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2218 (2.2K)
Saving to: ‘skorch_0.9.0-3.dsc’

     0K ..  100%  120M=0s

2021-08-26 12:29:20 (120 MB/s) - ‘skorch_0.9.0-3.dsc’ saved [2218/2218]

Thu Aug 26 12:29:20 UTC 2021 I: skorch_0.9.0-3.dsc
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 3.0 (quilt)
Source: skorch
Binary: python3-skorch
Architecture: all
Version: 0.9.0-3
Maintainer: Debian Deep Learning Team
Uploaders: Mo Zhou
Homepage: https://github.com/skorch-dev/skorch
Standards-Version: 4.5.0
Vcs-Browser: https://salsa.debian.org/deeplearning-team/skorch
Vcs-Git: https://salsa.debian.org/deeplearning-team/skorch.git
Build-Depends: debhelper-compat (= 13), dh-python, python3-all, python3-coverage <!nocheck>, python3-flaky <!nocheck>, python3-numpy, python3-pandas <!nocheck>, python3-pytest <!nocheck>, python3-pytest-cov <!nocheck>, python3-scipy, python3-setuptools, python3-sklearn, python3-tabulate, python3-torch (>= 1.3.1), python3-tqdm
Package-List:
 python3-skorch deb science optional arch=all
Checksums-Sha1:
 cfafeb562c05ce092985f4806fe6aefc3d41adb3 2876912 skorch_0.9.0.orig.tar.gz
 7cc08a090e5de3d1a2e2325b922a0c11b0b04c57 3092 skorch_0.9.0-3.debian.tar.xz
Checksums-Sha256:
 535f41986e58d42335acb0c57d342657ad4d59619d39b98ee4802b42a6dc3fbd 2876912 skorch_0.9.0.orig.tar.gz
 4c9c1474a49f04952a5cef8e8f66dcfdee6aa517c5275d9acac14dbbfaf4304f 3092 skorch_0.9.0-3.debian.tar.xz
Files:
 88d9ea6adf00b6c518dcbbb21e303219 2876912 skorch_0.9.0.orig.tar.gz
 93edecea9dfaad242d19340495e00302 3092 skorch_0.9.0-3.debian.tar.xz

-----BEGIN PGP SIGNATURE-----

iQJFBAEBCgAvFiEEY4vHXsHlxYkGfjXeYmRes19oaooFAl+/ZAIRHGx1bWluQGRl
Ymlhbi5vcmcACgkQYmRes19oaop5Og/+J2MhS1Exc5hO4c5b88cEWUebVeXf1Ix+
b0uLrsX4rNFoWo30CFsjUCvfMOJh8awy+Htp49u5APHLf7q0EdJx03xveDr+ezod
UB7p7Xy191OPzecx558N5t6xalY3B/SQr9DHUmR14DfEpP/bIWCMBwD5p8jpEloP
DLF7KFTWW/rE+ykEhxt40txUhboxU8Le9coOaGDfg+MZTdW14o/o+IpAhUa3l3aU
bxVZiyehXDGBaNRd1NYksRQEwSRM+ZWEKUgGmcPb4v1jwh0eDv10REXHxXAapO0L
JoxeGPd1iUYeamavyRjEdnZXhNcGN4TcqOUr82tgn9cLs2rW9e+73uzzzrUwrpMm
8pKFqTHNhcGY/iAsVjb40rqw5EgIDyLP2mM9xhLO5/qBkSoMLefLqsEONxJN176Y
ZUX6JtyOS4Keb2Byu88o21rv/A5JhIxQFG+781zLUk40wPqsCxlPx2iYYz454ebI
NX7Ejnibqism61VSz2IT33kx+dVVHy0hgKbLVQlDOlMKmGC/rnHNPb2FlUI0R9XT
SctnykQ8IV2HnJ41BB4Kd81Yyoa9MOikjFhFOlE8bG+J9J+EJe6bF5URdfsyirSa
9MSJUh7TTbVqFy+laqQCYvmam03bP6ZQZGqjWftAFjGAYom1LV9Sm3BqCG+n1I1V
44j/AUs+3lg=
=ikMS
-----END PGP SIGNATURE-----

Thu Aug 26 12:29:20 UTC 2021 I: Checking whether the package is not for us
Thu Aug 26 12:29:20 UTC 2021 I: Starting 1st build on remote node virt64a-armhf-rb.debian.net.
Thu Aug 26 12:29:20 UTC 2021 I: Preparing to do remote build '1' on virt64a-armhf-rb.debian.net.
Thu Aug 26 12:42:28 UTC 2021 I: Deleting $TMPDIR on virt64a-armhf-rb.debian.net.
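The Checksums-Sha256 stanza above is enough to spot-check the downloaded artifacts by hand. A minimal sketch, not part of the original run; the keyring path assumes the debian-keyring package is installed:

$ sha256sum -c <<EOF
535f41986e58d42335acb0c57d342657ad4d59619d39b98ee4802b42a6dc3fbd  skorch_0.9.0.orig.tar.gz
4c9c1474a49f04952a5cef8e8f66dcfdee6aa517c5275d9acac14dbbfaf4304f  skorch_0.9.0-3.debian.tar.xz
EOF
$ gpgv --keyring /usr/share/keyrings/debian-keyring.gpg skorch_0.9.0-3.dsc

The gpgv warning that shows up during extraction below ("Can't check signature: No public key") only means the build chroot carries no keyring containing key 638BC75EC1E5C589067E35DE62645EB35F686A8A; it does not affect the build itself.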
I: pbuilder: network access will be disabled during build
I: Current time: Thu Aug 26 00:29:29 -12 2021
I: pbuilder-time-stamp: 1629980969
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/bullseye-reproducible-base.tgz]
I: copying local configuration
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: policy-rc.d already exists
I: Copying source file
I: copying [skorch_0.9.0-3.dsc]
I: copying [./skorch_0.9.0.orig.tar.gz]
I: copying [./skorch_0.9.0-3.debian.tar.xz]
I: Extracting source
gpgv: unknown type of key resource 'trustedkeys.kbx'
gpgv: keyblock resource '/tmp/dpkg-verify-sig.HHB4t_8N/trustedkeys.kbx': General error
gpgv: Signature made Wed Nov 25 20:14:58 2020 -12
gpgv:                using RSA key 638BC75EC1E5C589067E35DE62645EB35F686A8A
gpgv:                issuer "lumin@debian.org"
gpgv: Can't check signature: No public key
dpkg-source: warning: failed to verify signature on ./skorch_0.9.0-3.dsc
dpkg-source: info: extracting skorch in skorch-0.9.0
dpkg-source: info: unpacking skorch_0.9.0.orig.tar.gz
dpkg-source: info: unpacking skorch_0.9.0-3.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying skip-test.patch
I: using fakeroot in build.
I: Installing the build-deps
I: user script /srv/workspace/pbuilder/1393/tmp/hooks/D02_print_environment starting
I: set
BUILDDIR='/build'
BUILDUSERGECOS='first user,first room,first work-phone,first home-phone,first other'
BUILDUSERNAME='pbuilder1'
BUILD_ARCH='armhf'
DEBIAN_FRONTEND='noninteractive'
DEB_BUILD_OPTIONS='buildinfo=+all reproducible=+all,-fixfilepath parallel=3'
DISTRIBUTION=''
HOME='/root'
HOST_ARCH='armhf'
IFS=' '
INVOCATION_ID='dc66491f89ad4c9288bae94646a5ca04'
LANG='C'
LANGUAGE='en_US:en'
LC_ALL='C'
MAIL='/var/mail/root'
OPTIND='1'
PATH='/usr/sbin:/usr/bin:/sbin:/bin:/usr/games'
PBCURRENTCOMMANDLINEOPERATION='build'
PBUILDER_OPERATION='build'
PBUILDER_PKGDATADIR='/usr/share/pbuilder'
PBUILDER_PKGLIBDIR='/usr/lib/pbuilder'
PBUILDER_SYSCONFDIR='/etc'
PPID='1393'
PS1='# '
PS2='> '
PS4='+ '
PWD='/'
SHELL='/bin/bash'
SHLVL='2'
SUDO_COMMAND='/usr/bin/timeout -k 18.1h 18h /usr/bin/ionice -c 3 /usr/bin/nice /usr/sbin/pbuilder --build --configfile /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/pbuilderrc_PyQ0 --hookdir /etc/pbuilder/first-build-hooks --debbuildopts -b --basetgz /var/cache/pbuilder/bullseye-reproducible-base.tgz --buildresult /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/b1 --logfile b1/build.log skorch_0.9.0-3.dsc'
SUDO_GID='114'
SUDO_UID='108'
SUDO_USER='jenkins'
TERM='unknown'
TZ='/usr/share/zoneinfo/Etc/GMT+12'
USER='root'
_='/usr/bin/systemd-run'
http_proxy='http://10.0.0.15:8000/'
I: uname -a
Linux virt64a 5.10.0-8-arm64 #1 SMP Debian 5.10.46-4 (2021-08-03) aarch64 GNU/Linux
I: ls -l /bin
total 3580
-rwxr-xr-x 1 root root 816764 Aug  4 08:25 bash
-rwxr-xr-x 3 root root  26052 Jul 20  2020 bunzip2
-rwxr-xr-x 3 root root  26052 Jul 20  2020 bzcat
lrwxrwxrwx 1 root root      6 Jul 20  2020 bzcmp -> bzdiff
-rwxr-xr-x 1 root root   2225 Jul 20  2020 bzdiff
lrwxrwxrwx 1 root root      6 Jul 20  2020 bzegrep -> bzgrep
-rwxr-xr-x 1 root root   4877 Sep  4  2019 bzexe
lrwxrwxrwx 1 root root      6 Jul 20  2020 bzfgrep -> bzgrep
-rwxr-xr-x 1 root root   3775 Jul 20  2020 bzgrep
-rwxr-xr-x 3 root root  26052 Jul 20  2020 bzip2
-rwxr-xr-x 1 root root   9636 Jul 20  2020 bzip2recover
lrwxrwxrwx 1 root root      6 Jul 20  2020 bzless -> bzmore
-rwxr-xr-x 1 root root   1297 Jul 20  2020 bzmore
-rwxr-xr-x 1 root root  26668 Sep 22  2020 cat
-rwxr-xr-x 1 root root  43104 Sep 22  2020 chgrp
-rwxr-xr-x 1 root root  38984 Sep 22  2020 chmod
-rwxr-xr-x 1 root root  43112 Sep 22  2020 chown
-rwxr-xr-x 1 root root  92616 Sep 22  2020 cp
-rwxr-xr-x 1 root root  75524 Dec 10  2020 dash
-rwxr-xr-x 1 root root  75880 Sep 22  2020 date
-rwxr-xr-x 1 root root  55436 Sep 22  2020 dd
-rwxr-xr-x 1 root root  59912 Sep 22  2020 df
-rwxr-xr-x 1 root root  96764 Sep 22  2020 dir
-rwxr-xr-x 1 root root  55012 Jul 28 07:09 dmesg
lrwxrwxrwx 1 root root      8 Nov  6  2019 dnsdomainname -> hostname
lrwxrwxrwx 1 root root      8 Nov  6  2019 domainname -> hostname
-rwxr-xr-x 1 root root  22508 Sep 22  2020 echo
-rwxr-xr-x 1 root root     28 Nov  9  2020 egrep
-rwxr-xr-x 1 root root  22496 Sep 22  2020 false
-rwxr-xr-x 1 root root     28 Nov  9  2020 fgrep
-rwxr-xr-x 1 root root  47492 Jul 28 07:09 findmnt
-rwsr-xr-x 1 root root  26076 Feb 26 04:12 fusermount
-rwxr-xr-x 1 root root 124508 Nov  9  2020 grep
-rwxr-xr-x 2 root root   2346 Mar  2 11:30 gunzip
-rwxr-xr-x 1 root root   6376 Mar  2 11:30 gzexe
-rwxr-xr-x 1 root root  64212 Mar  2 11:30 gzip
-rwxr-xr-x 1 root root  13784 Nov  6  2019 hostname
-rwxr-xr-x 1 root root  43180 Sep 22  2020 ln
-rwxr-xr-x 1 root root  35068 Feb  7  2020 login
-rwxr-xr-x 1 root root  96764 Sep 22  2020 ls
-rwxr-xr-x 1 root root  99940 Jul 28 07:09 lsblk
-rwxr-xr-x 1 root root  51408 Sep 22  2020 mkdir
-rwxr-xr-x 1 root root  43184 Sep 22  2020 mknod
-rwxr-xr-x 1 root root  30780 Sep 22  2020 mktemp
-rwxr-xr-x 1 root root  34408 Jul 28 07:09 more
-rwsr-xr-x 1 root root  34400 Jul 28 07:09 mount
-rwxr-xr-x 1 root root   9824 Jul 28 07:09 mountpoint
-rwxr-xr-x 1 root root  88524 Sep 22  2020 mv
lrwxrwxrwx 1 root root      8 Nov  6  2019 nisdomainname -> hostname
lrwxrwxrwx 1 root root     14 Apr 18 03:38 pidof -> /sbin/killall5
-rwxr-xr-x 1 root root  26652 Sep 22  2020 pwd
lrwxrwxrwx 1 root root      4 Aug  4 08:25 rbash -> bash
-rwxr-xr-x 1 root root  30740 Sep 22  2020 readlink
-rwxr-xr-x 1 root root  43104 Sep 22  2020 rm
-rwxr-xr-x 1 root root  30732 Sep 22  2020 rmdir
-rwxr-xr-x 1 root root  14144 Sep 27  2020 run-parts
-rwxr-xr-x 1 root root  76012 Dec 22  2018 sed
lrwxrwxrwx 1 root root      4 Aug 20 21:25 sh -> dash
-rwxr-xr-x 1 root root  22532 Sep 22  2020 sleep
-rwxr-xr-x 1 root root  55360 Sep 22  2020 stty
-rwsr-xr-x 1 root root  46704 Jul 28 07:09 su
-rwxr-xr-x 1 root root  22532 Sep 22  2020 sync
-rwxr-xr-x 1 root root 340872 Feb 16  2021 tar
-rwxr-xr-x 1 root root   9808 Sep 27  2020 tempfile
-rwxr-xr-x 1 root root  67696 Sep 22  2020 touch
-rwxr-xr-x 1 root root  22496 Sep 22  2020 true
-rwxr-xr-x 1 root root   9636 Feb 26 04:12 ulockmgr_server
-rwsr-xr-x 1 root root  22108 Jul 28 07:09 umount
-rwxr-xr-x 1 root root  22520 Sep 22  2020 uname
-rwxr-xr-x 2 root root   2346 Mar  2 11:30 uncompress
-rwxr-xr-x 1 root root  96764 Sep 22  2020 vdir
-rwxr-xr-x 1 root root  38512 Jul 28 07:09 wdctl
lrwxrwxrwx 1 root root      8 Nov  6  2019 ypdomainname -> hostname
-rwxr-xr-x 1 root root   1984 Mar  2 11:30 zcat
-rwxr-xr-x 1 root root   1678 Mar  2 11:30 zcmp
-rwxr-xr-x 1 root root   5880 Mar  2 11:30 zdiff
-rwxr-xr-x 1 root root     29 Mar  2 11:30 zegrep
-rwxr-xr-x 1 root root     29 Mar  2 11:30 zfgrep
-rwxr-xr-x 1 root root   2081 Mar  2 11:30 zforce
-rwxr-xr-x 1 root root   7585 Mar  2 11:30 zgrep
-rwxr-xr-x 1 root root   2206 Mar  2 11:30 zless
-rwxr-xr-x 1 root root   1842 Mar  2 11:30 zmore
-rwxr-xr-x 1 root root   4553 Mar  2 11:30 znew
I: user script /srv/workspace/pbuilder/1393/tmp/hooks/D02_print_environment finished
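The environment dump above records how this build was launched (see SUDO_COMMAND and DEB_BUILD_OPTIONS). A rough local equivalent, sketched from those values; my-pbuilderrc and the out/ directory are hypothetical stand-ins for the jenkins-specific --configfile and --buildresult paths:

$ sudo TZ=/usr/share/zoneinfo/Etc/GMT+12 pbuilder --build \
    --configfile my-pbuilderrc \
    --debbuildopts -b \
    --basetgz /var/cache/pbuilder/bullseye-reproducible-base.tgz \
    --buildresult out --logfile out/build.log \
    skorch_0.9.0-3.dsc

DEB_BUILD_OPTIONS='buildinfo=+all reproducible=+all,-fixfilepath parallel=3' is what asks dpkg-buildpackage for a full .buildinfo and the reproducibility features; inside the chroot it is presumably injected via the custom pbuilderrc rather than the calling environment.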
 -> Attempting to satisfy build-dependencies
 -> Creating pbuilder-satisfydepends-dummy package
Package: pbuilder-satisfydepends-dummy
Version: 0.invalid.0
Architecture: armhf
Maintainer: Debian Pbuilder Team
Description: Dummy package to satisfy dependencies with aptitude - created by pbuilder
 This package was created automatically by pbuilder to satisfy the
 build-dependencies of the package being currently built.
Depends: debhelper-compat (= 13), dh-python, python3-all, python3-coverage, python3-flaky, python3-numpy, python3-pandas, python3-pytest, python3-pytest-cov, python3-scipy, python3-setuptools, python3-sklearn, python3-tabulate, python3-torch (>= 1.3.1), python3-tqdm
dpkg-deb: building package 'pbuilder-satisfydepends-dummy' in '/tmp/satisfydepends-aptitude/pbuilder-satisfydepends-dummy.deb'.
Selecting previously unselected package pbuilder-satisfydepends-dummy.
(Reading database ... 19398 files and directories currently installed.)
Preparing to unpack .../pbuilder-satisfydepends-dummy.deb ...
Unpacking pbuilder-satisfydepends-dummy (0.invalid.0) ...
dpkg: pbuilder-satisfydepends-dummy: dependency problems, but configuring anyway as you requested:
 pbuilder-satisfydepends-dummy depends on debhelper-compat (= 13); however:
  Package debhelper-compat is not installed.
 pbuilder-satisfydepends-dummy depends on dh-python; however:
  Package dh-python is not installed.
 pbuilder-satisfydepends-dummy depends on python3-all; however:
  Package python3-all is not installed.
 pbuilder-satisfydepends-dummy depends on python3-coverage; however:
  Package python3-coverage is not installed.
 pbuilder-satisfydepends-dummy depends on python3-flaky; however:
  Package python3-flaky is not installed.
 pbuilder-satisfydepends-dummy depends on python3-numpy; however:
  Package python3-numpy is not installed.
 pbuilder-satisfydepends-dummy depends on python3-pandas; however:
  Package python3-pandas is not installed.
 pbuilder-satisfydepends-dummy depends on python3-pytest; however:
  Package python3-pytest is not installed.
 pbuilder-satisfydepends-dummy depends on python3-pytest-cov; however:
  Package python3-pytest-cov is not installed.
 pbuilder-satisfydepends-dummy depends on python3-scipy; however:
  Package python3-scipy is not installed.
 pbuilder-satisfydepends-dummy depends on python3-setuptools; however:
  Package python3-setuptools is not installed.
 pbuilder-satisfydepends-dummy depends on python3-sklearn; however:
  Package python3-sklearn is not installed.
 pbuilder-satisfydepends-dummy depends on python3-tabulate; however:
  Package python3-tabulate is not installed.
 pbuilder-satisfydepends-dummy depends on python3-torch (>= 1.3.1); however:
  Package python3-torch is not installed.
 pbuilder-satisfydepends-dummy depends on python3-tqdm; however:
  Package python3-tqdm is not installed.
Setting up pbuilder-satisfydepends-dummy (0.invalid.0) ...
Reading package lists...
Building dependency tree...
Reading state information...
Initializing package states...
Writing extended state information...
Building tag database...
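The dummy package above is how pbuilder turns Build-Depends into something the package manager can resolve: a binary package whose Depends mirror the source's Build-Depends is force-installed, and the resolver is then asked to make the system consistent. A minimal sketch of the same trick; the directory name and maintainer address are hypothetical placeholders, and Depends is abbreviated to a few of the fifteen entries listed above:

$ mkdir -p dummy/DEBIAN
$ cat > dummy/DEBIAN/control <<EOF
Package: pbuilder-satisfydepends-dummy
Version: 0.invalid.0
Architecture: armhf
Maintainer: nobody <nobody@example.invalid>
Description: dummy package carrying the build-dependencies
Depends: debhelper-compat (= 13), dh-python, python3-all, python3-torch (>= 1.3.1)
EOF
$ dpkg-deb --build dummy pbuilder-satisfydepends-dummy.deb
$ dpkg --force-depends -i pbuilder-satisfydepends-dummy.deb
$ apt-get -y -f install

The --force-depends install is what produces the "dependency problems, but configuring anyway" block in the log; the follow-up resolver run (aptitude here, apt-get -f in the sketch) then pulls in everything the dummy declared.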
pbuilder-satisfydepends-dummy is already installed at the requested version (0.invalid.0)
pbuilder-satisfydepends-dummy is already installed at the requested version (0.invalid.0)
The following NEW packages will be installed:
  adwaita-icon-theme{a} autoconf{a} automake{a} autopoint{a} autotools-dev{a} bsdextrautils{a}
  ca-certificates{a} dbus{a} dbus-user-session{a} dconf-gsettings-backend{a} dconf-service{a}
  debhelper{a} dh-autoreconf{a} dh-python{a} dh-strip-nondeterminism{a} dmsetup{a} dwz{a} file{a}
  fontconfig{a} fontconfig-config{a} fonts-dejavu-core{a} gdal-data{a} gettext{a} gettext-base{a}
  glib-networking{a} glib-networking-common{a} glib-networking-services{a} groff-base{a}
  gsettings-desktop-schemas{a} gtk-update-icon-cache{a} hicolor-icon-theme{a} intltool-debian{a}
  iso-codes{a} libaec0{a} libaom0{a} libapparmor1{a} libarchive-zip-perl{a} libarchive13{a}
  libargon2-1{a} libarmadillo10{a} libarpack2{a} libatk-bridge2.0-0{a} libatk1.0-0{a}
  libatk1.0-data{a} libatspi2.0-0{a} libavahi-client3{a} libavahi-common-data{a}
  libavahi-common3{a} libavcodec58{a} libavformat58{a} libavutil56{a} libblas3{a} libbluray2{a}
  libbrotli1{a} libbsd0{a} libcairo-gobject2{a} libcairo2{a} libcap2{a} libcap2-bin{a}
  libcfitsio9{a} libcharls2{a} libchromaprint1{a} libcodec2-0.9{a} libcolord2{a} libcpuinfo0{a}
  libcryptsetup12{a} libcups2{a} libcurl3-gnutls{a} libcurl4{a} libdap27{a} libdapclient6v5{a}
  libdatrie1{a} libdav1d4{a} libdbus-1-3{a} libdc1394-25{a} libdconf1{a} libde265-0{a}
  libdebhelper-perl{a} libdeflate0{a} libdevmapper1.02.1{a} libdrm-common{a} libdrm2{a} libdw1{a}
  libelf1{a} libepoxy0{a} libepsilon1{a} libexif12{a} libexpat1{a}
  libfile-stripnondeterminism-perl{a} libfmt7{a} libfontconfig1{a} libfreetype6{a} libfreexl1{a}
  libfribidi0{a} libfyba0{a} libgd3{a} libgdal28{a} libgdcm3.0{a} libgdk-pixbuf-2.0-0{a}
  libgdk-pixbuf2.0-common{a} libgeos-3.9.0{a} libgeos-c1v5{a} libgeotiff5{a} libgflags2.2{a}
  libgfortran5{a} libgif7{a} libglib2.0-0{a} libgme0{a} libgoogle-glog0v5{a} libgphoto2-6{a}
  libgphoto2-port12{a} libgraphite2-3{a} libgsm1{a} libgstreamer-plugins-base1.0-0{a}
  libgstreamer1.0-0{a} libgtk-3-0{a} libgtk-3-common{a} libharfbuzz0b{a} libhdf4-0-alt{a}
  libhdf5-103-1{a} libhdf5-hl-100{a} libheif1{a} libicu67{a} libilmbase25{a} libip4tc2{a}
  libjbig0{a} libjpeg62-turbo{a} libjs-jquery{a} libjs-jquery-hotkeys{a}
  libjs-jquery-isonscreen{a} libjs-jquery-metadata{a} libjs-jquery-tablesorter{a}
  libjs-jquery-throttle-debounce{a} libjson-c5{a} libjson-glib-1.0-0{a}
  libjson-glib-1.0-common{a} libkmlbase1{a} libkmldom1{a} libkmlengine1{a} libkmod2{a}
  liblapack3{a} liblbfgsb0{a} liblcms2-2{a} libldap-2.4-2{a} liblept5{a} libleveldb1d{a}
  liblmdb0{a} libltdl7{a} libmagic-mgc{a} libmagic1{a} libmariadb3{a} libmd0{a} libminizip1{a}
  libmp3lame0{a} libmpdec3{a} libmpg123-0{a} libncurses6{a} libnetcdf18{a} libnghttp2-14{a}
  libnorm1{a} libnspr4{a} libnss3{a} libodbc1{a} libogdi4.1{a} libogg0{a} libonnx1{a}
  libopencv-calib3d4.5{a} libopencv-contrib4.5{a} libopencv-core4.5{a} libopencv-dnn4.5{a}
  libopencv-features2d4.5{a} libopencv-flann4.5{a} libopencv-highgui4.5{a}
  libopencv-imgcodecs4.5{a} libopencv-imgproc4.5{a} libopencv-ml4.5{a}
  libopencv-objdetect4.5{a} libopencv-video4.5{a} libopencv-videoio4.5{a} libopenexr25{a}
  libopenjp2-7{a} libopenmpt0{a} libopus0{a} liborc-0.4-0{a} libpam-systemd{a} libpango-1.0-0{a}
  libpangocairo-1.0-0{a} libpangoft2-1.0-0{a} libpgm-5.3-0{a} libpipeline1{a} libpixman-1-0{a}
  libpng16-16{a} libpoppler102{a} libpq5{a} libprocps8{a} libproj19{a} libprotobuf23{a}
  libproxy1v5{a} libpsl5{a}
  libpython3-stdlib{a} libpython3.9-minimal{a} libpython3.9-stdlib{a} libqhull8.0{a}
  librabbitmq4{a} libraw1394-11{a} libreadline8{a} librest-0.7-0{a} librsvg2-2{a} librtmp1{a}
  librttopo1{a} libsasl2-2{a} libsasl2-modules-db{a} libshine3{a} libsigsegv2{a} libsleef3{a}
  libsnappy1v5{a} libsocket++1{a} libsodium23{a} libsoup-gnome2.4-1{a} libsoup2.4-1{a} libsoxr0{a}
  libspatialite7{a} libspeex1{a} libsrt1.4-gnutls{a} libssh-gcrypt-4{a} libssh2-1{a}
  libsub-override-perl{a} libsuperlu5{a} libswresample3{a} libswscale5{a} libsz2{a} libtbb2{a}
  libtesseract4{a} libthai-data{a} libthai0{a} libtheora0{a} libtiff5{a} libtool{a} libtorch1.7{a}
  libtwolame0{a} libuchardet0{a} libudfread0{a} libunwind8{a} liburiparser1{a} libusb-1.0-0{a}
  libva-drm2{a} libva-x11-2{a} libva2{a} libvdpau1{a} libvorbis0a{a} libvorbisenc2{a}
  libvorbisfile3{a} libvpx6{a} libwavpack1{a} libwayland-client0{a} libwayland-cursor0{a}
  libwayland-egl1{a} libwebp6{a} libwebpmux3{a} libx11-6{a} libx11-data{a} libx264-160{a}
  libx265-192{a} libxau6{a} libxcb-render0{a} libxcb-shm0{a} libxcb1{a} libxcomposite1{a}
  libxcursor1{a} libxdamage1{a} libxdmcp6{a} libxerces-c3.2{a} libxext6{a} libxfixes3{a} libxi6{a}
  libxinerama1{a} libxkbcommon0{a} libxml2{a} libxpm4{a} libxrandr2{a} libxrender1{a}
  libxvidcore4{a} libyaml-0-2{a} libzmq5{a} libzvbi-common{a} libzvbi0{a} m4{a} man-db{a}
  mariadb-common{a} media-types{a} mysql-common{a} ocl-icd-libopencl1{a} odbcinst{a}
  odbcinst1debian2{a} openssl{a} po-debconf{a} procps{a} proj-data{a} python3{a} python3-all{a}
  python3-attr{a} python3-certifi{a} python3-chardet{a} python3-cov-core{a} python3-coverage{a}
  python3-dateutil{a} python3-decorator{a} python3-distutils{a} python3-flaky{a} python3-future{a}
  python3-idna{a} python3-importlib-metadata{a} python3-iniconfig{a} python3-joblib{a}
  python3-lib2to3{a} python3-minimal{a} python3-more-itertools{a} python3-nose2{a} python3-numpy{a}
  python3-packaging{a} python3-pandas{a} python3-pandas-lib{a} python3-pkg-resources{a}
  python3-pluggy{a} python3-py{a} python3-pyparsing{a} python3-pytest{a} python3-pytest-cov{a}
  python3-requests{a} python3-scipy{a} python3-setuptools{a} python3-six{a} python3-sklearn{a}
  python3-sklearn-lib{a} python3-tabulate{a} python3-threadpoolctl{a} python3-toml{a}
  python3-torch{a} python3-tqdm{a} python3-typing-extensions{a} python3-tz{a} python3-urllib3{a}
  python3-yaml{a} python3-zipp{a} python3.9{a} python3.9-minimal{a} readline-common{a}
  sensible-utils{a} shared-mime-info{a} systemd{a} systemd-sysv{a} systemd-timesyncd{a} ucf{a}
  xkb-data{a}
The following packages are RECOMMENDED but will NOT be installed:
  at-spi2-core curl gstreamer1.0-plugins-base javascript-common libaacs0 libarchive-cpio-perl
  libgdk-pixbuf2.0-bin libglib2.0-data libgphoto2-l10n libgpm2 libgtk-3-bin libldap-common
  libltdl-dev libmail-sendmail-perl libnss-systemd libpam-cap librsvg2-common libsasl2-modules
  libtorch-dev libvdpau-va-gl1 lynx mesa-va-drivers mesa-vdpau-drivers ninja-build poppler-data
  proj-bin psmisc publicsuffix pybind11-dev python3-bottleneck python3-bs4 python3-html5lib
  python3-jinja2 python3-lxml python3-matplotlib python3-nose python3-numexpr python3-odf
  python3-openpyxl python3-pil python3-psutil python3-pygments python3-simplejson python3-tables
  python3-xlwt va-driver-all vdpau-driver-all wget xdg-user-dirs
0 packages upgraded, 354 newly installed, 0 to remove and 0 not upgraded.
Need to get 185 MB of archives. After unpacking 601 MB will be used.
Writing extended state information...
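For comparison outside the chroot, apt can compute much the same dependency set directly from the archive; a sketch, assuming a bullseye system with deb-src lines enabled:

$ apt-get -s build-dep skorch

The -s flag simulates only, printing the resolution without installing; modulo recommends handling it should arrive at essentially the install set shown above.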
Get: 1 http://deb.debian.org/debian bullseye/main armhf libapparmor1 armhf 2.13.6-10 [94.5 kB]
Get: 2 http://deb.debian.org/debian bullseye/main armhf libcap2 armhf 1:2.44-1 [21.2 kB]
Get: 3 http://deb.debian.org/debian bullseye/main armhf libargon2-1 armhf 0~20171227-0.2 [20.4 kB]
Get: 4 http://deb.debian.org/debian bullseye/main armhf dmsetup armhf 2:1.02.175-2.1 [92.1 kB]
Get: 5 http://deb.debian.org/debian bullseye/main armhf libdevmapper1.02.1 armhf 2:1.02.175-2.1 [135 kB]
Get: 6 http://deb.debian.org/debian bullseye/main armhf libjson-c5 armhf 0.15-2 [39.0 kB]
Get: 7 http://deb.debian.org/debian bullseye/main armhf libcryptsetup12 armhf 2:2.3.5-1 [223 kB]
Get: 8 http://deb.debian.org/debian bullseye/main armhf libip4tc2 armhf 1.8.7-1 [32.6 kB]
Get: 9 http://deb.debian.org/debian bullseye/main armhf libkmod2 armhf 28-1 [48.5 kB]
Get: 10 http://deb.debian.org/debian bullseye/main armhf systemd-timesyncd armhf 247.3-6 [130 kB]
Get: 11 http://deb.debian.org/debian bullseye/main armhf systemd armhf 247.3-6 [4298 kB]
Get: 12 http://deb.debian.org/debian bullseye/main armhf systemd-sysv armhf 247.3-6 [113 kB]
Get: 13 http://deb.debian.org/debian bullseye/main armhf libdbus-1-3 armhf 1.12.20-2 [196 kB]
Get: 14 http://deb.debian.org/debian bullseye/main armhf libexpat1 armhf 2.2.10-2 [76.3 kB]
Get: 15 http://deb.debian.org/debian bullseye/main armhf dbus armhf 1.12.20-2 [222 kB]
Get: 16 http://deb.debian.org/debian bullseye/main armhf bsdextrautils armhf 2.36.1-8 [138 kB]
Get: 17 http://deb.debian.org/debian bullseye/main armhf libuchardet0 armhf 0.0.7-1 [65.0 kB]
Get: 18 http://deb.debian.org/debian bullseye/main armhf groff-base armhf 1.22.4-6 [847 kB]
Get: 19 http://deb.debian.org/debian bullseye/main armhf libpipeline1 armhf 1.5.3-1 [30.1 kB]
Get: 20 http://deb.debian.org/debian bullseye/main armhf man-db armhf 2.9.4-2 [1319 kB]
Get: 21 http://deb.debian.org/debian bullseye/main armhf libjs-jquery all 3.5.1+dfsg+~3.5.5-7 [315 kB]
Get: 22 http://deb.debian.org/debian bullseye/main armhf libjs-jquery-hotkeys all 0~20130707+git2d51e3a9+dfsg-2.1 [11.5 kB]
Get: 23 http://deb.debian.org/debian bullseye/main armhf libpython3.9-minimal armhf 3.9.2-1 [790 kB]
Get: 24 http://deb.debian.org/debian bullseye/main armhf python3.9-minimal armhf 3.9.2-1 [1630 kB]
Get: 25 http://deb.debian.org/debian bullseye/main armhf python3-minimal armhf 3.9.2-3 [38.2 kB]
Get: 26 http://deb.debian.org/debian bullseye/main armhf media-types all 4.0.0 [30.3 kB]
Get: 27 http://deb.debian.org/debian bullseye/main armhf libmpdec3 armhf 2.5.1-1 [74.9 kB]
Get: 28 http://deb.debian.org/debian bullseye/main armhf readline-common all 8.1-1 [73.7 kB]
Get: 29 http://deb.debian.org/debian bullseye/main armhf libreadline8 armhf 8.1-1 [147 kB]
Get: 30 http://deb.debian.org/debian bullseye/main armhf libpython3.9-stdlib armhf 3.9.2-1 [1608 kB]
Get: 31 http://deb.debian.org/debian bullseye/main armhf python3.9 armhf 3.9.2-1 [466 kB]
Get: 32 http://deb.debian.org/debian bullseye/main armhf libpython3-stdlib armhf 3.9.2-3 [21.4 kB]
Get: 33 http://deb.debian.org/debian bullseye/main armhf python3 armhf 3.9.2-3 [37.9 kB]
Get: 34 http://deb.debian.org/debian bullseye/main armhf libncurses6 armhf 6.2+20201114-2 [80.5 kB]
Get: 35 http://deb.debian.org/debian bullseye/main armhf libprocps8 armhf 2:3.3.17-5 [60.7 kB]
Get: 36 http://deb.debian.org/debian bullseye/main armhf procps armhf 2:3.3.17-5 [492 kB]
Get: 37 http://deb.debian.org/debian bullseye/main armhf sensible-utils all 0.0.14 [14.8 kB]
Get: 38 http://deb.debian.org/debian bullseye/main armhf openssl armhf 1.1.1k-1 [826 kB]
Get: 39 http://deb.debian.org/debian bullseye/main armhf ca-certificates all 20210119 [158 kB]
Get: 40 http://deb.debian.org/debian bullseye/main armhf libmagic-mgc armhf 1:5.39-3 [273 kB]
Get: 41 http://deb.debian.org/debian bullseye/main armhf libmagic1 armhf 1:5.39-3 [117 kB]
Get: 42 http://deb.debian.org/debian bullseye/main armhf file armhf 1:5.39-3 [68.1 kB]
Get: 43 http://deb.debian.org/debian bullseye/main armhf gettext-base armhf 0.21-4 [171 kB]
Get: 44 http://deb.debian.org/debian bullseye/main armhf libpam-systemd armhf 247.3-6 [262 kB]
Get: 45 http://deb.debian.org/debian bullseye/main armhf ucf all 3.0043 [74.0 kB]
Get: 46 http://deb.debian.org/debian bullseye/main armhf hicolor-icon-theme all 0.17-2 [11.4 kB]
Get: 47 http://deb.debian.org/debian bullseye/main armhf libgdk-pixbuf2.0-common all 2.42.2+dfsg-1 [320 kB]
Get: 48 http://deb.debian.org/debian bullseye/main armhf libglib2.0-0 armhf 2.66.8-1 [1206 kB]
Get: 49 http://deb.debian.org/debian bullseye/main armhf libicu67 armhf 67.1-7 [8319 kB]
Get: 50 http://deb.debian.org/debian bullseye/main armhf libxml2 armhf 2.9.10+dfsg-6.7 [602 kB]
Get: 51 http://deb.debian.org/debian bullseye/main armhf shared-mime-info armhf 2.0-1 [699 kB]
Get: 52 http://deb.debian.org/debian bullseye/main armhf libjpeg62-turbo armhf 1:2.0.6-4 [123 kB]
Get: 53 http://deb.debian.org/debian bullseye/main armhf libpng16-16 armhf 1.6.37-3 [277 kB]
Get: 54 http://deb.debian.org/debian bullseye/main armhf libdeflate0 armhf 1.7-1 [43.1 kB]
Get: 55 http://deb.debian.org/debian bullseye/main armhf libjbig0 armhf 2.1-3.1+b2 [28.4 kB]
Get: 56 http://deb.debian.org/debian bullseye/main armhf libwebp6 armhf 0.6.1-2.1 [226 kB]
Get: 57 http://deb.debian.org/debian bullseye/main armhf libtiff5 armhf 4.2.0-1 [271 kB]
Get: 58 http://deb.debian.org/debian bullseye/main armhf libgdk-pixbuf-2.0-0 armhf 2.42.2+dfsg-1 [131 kB]
Get: 59 http://deb.debian.org/debian bullseye/main armhf gtk-update-icon-cache armhf 3.24.24-4 [86.4 kB]
Get: 60 http://deb.debian.org/debian bullseye/main armhf adwaita-icon-theme all 3.38.0-1 [10.9 MB]
Get: 61 http://deb.debian.org/debian bullseye/main armhf libsigsegv2 armhf 2.13-1 [34.0 kB]
Get: 62 http://deb.debian.org/debian bullseye/main armhf m4 armhf 1.4.18-5 [192 kB]
Get: 63 http://deb.debian.org/debian bullseye/main armhf autoconf all 2.69-14 [313 kB]
Get: 64 http://deb.debian.org/debian bullseye/main armhf autotools-dev all 20180224.1+nmu1 [77.1 kB]
Get: 65 http://deb.debian.org/debian bullseye/main armhf automake all 1:1.16.3-2 [814 kB]
Get: 66 http://deb.debian.org/debian bullseye/main armhf autopoint all 0.21-4 [510 kB]
Get: 67 http://deb.debian.org/debian bullseye/main armhf dbus-user-session armhf 1.12.20-2 [96.2 kB]
Get: 68 http://deb.debian.org/debian bullseye/main armhf libdconf1 armhf 0.38.0-2 [39.4 kB]
Get: 69 http://deb.debian.org/debian bullseye/main armhf dconf-service armhf 0.38.0-2 [33.6 kB]
Get: 70 http://deb.debian.org/debian bullseye/main armhf dconf-gsettings-backend armhf 0.38.0-2 [27.0 kB]
Get: 71 http://deb.debian.org/debian bullseye/main armhf libdebhelper-perl all 13.3.4 [189 kB]
Get: 72 http://deb.debian.org/debian bullseye/main armhf libtool all 2.4.6-15 [513 kB]
Get: 73 http://deb.debian.org/debian bullseye/main armhf dh-autoreconf all 20 [17.1 kB]
Get: 74 http://deb.debian.org/debian bullseye/main armhf libarchive-zip-perl all 1.68-1 [104 kB]
Get: 75 http://deb.debian.org/debian bullseye/main armhf libsub-override-perl all 0.09-2 [10.2 kB]
Get: 76 http://deb.debian.org/debian bullseye/main armhf libfile-stripnondeterminism-perl all 1.12.0-1 [26.3 kB]
Get: 77 http://deb.debian.org/debian bullseye/main armhf dh-strip-nondeterminism all 1.12.0-1 [15.4 kB]
Get: 78 http://deb.debian.org/debian bullseye/main armhf libelf1 armhf 0.183-1 [161 kB]
Get: 79 http://deb.debian.org/debian bullseye/main armhf dwz armhf 0.13+20210201-1 [179 kB]
Get: 80 http://deb.debian.org/debian bullseye/main armhf gettext armhf 0.21-4 [1243 kB]
Get: 81 http://deb.debian.org/debian bullseye/main armhf intltool-debian all 0.35.0+20060710.5 [26.8 kB]
Get: 82 http://deb.debian.org/debian bullseye/main armhf po-debconf all 1.0.21+nmu1 [248 kB]
Get: 83 http://deb.debian.org/debian bullseye/main armhf debhelper all 13.3.4 [1049 kB]
Get: 84 http://deb.debian.org/debian bullseye/main armhf python3-lib2to3 all 3.9.2-1 [77.8 kB]
Get: 85 http://deb.debian.org/debian bullseye/main armhf python3-distutils all 3.9.2-1 [143 kB]
Get: 86 http://deb.debian.org/debian bullseye/main armhf dh-python all 4.20201102+nmu1 [99.4 kB]
Get: 87 http://deb.debian.org/debian bullseye/main armhf libbrotli1 armhf 1.0.9-2+b2 [262 kB]
Get: 88 http://deb.debian.org/debian bullseye/main armhf libfreetype6 armhf 2.10.4+dfsg-1 [357 kB]
Get: 89 http://deb.debian.org/debian bullseye/main armhf fonts-dejavu-core all 2.37-2 [1069 kB]
Get: 90 http://deb.debian.org/debian bullseye/main armhf fontconfig-config all 2.13.1-4.2 [281 kB]
Get: 91 http://deb.debian.org/debian bullseye/main armhf libfontconfig1 armhf 2.13.1-4.2 [329 kB]
Get: 92 http://deb.debian.org/debian bullseye/main armhf fontconfig armhf 2.13.1-4.2 [415 kB]
Get: 93 http://deb.debian.org/debian bullseye/main armhf gdal-data all 3.2.2+dfsg-2 [462 kB]
Get: 94 http://deb.debian.org/debian bullseye/main armhf libproxy1v5 armhf 0.4.17-1 [54.4 kB]
Get: 95 http://deb.debian.org/debian bullseye/main armhf glib-networking-common all 2.66.0-2 [68.1 kB]
Get: 96 http://deb.debian.org/debian bullseye/main armhf glib-networking-services armhf 2.66.0-2 [16.9 kB]
Get: 97 http://deb.debian.org/debian bullseye/main armhf gsettings-desktop-schemas all 3.38.0-2 [588 kB]
Get: 98 http://deb.debian.org/debian bullseye/main armhf glib-networking armhf 2.66.0-2 [61.4 kB]
Get: 99 http://deb.debian.org/debian bullseye/main armhf iso-codes all 4.6.0-1 [2824 kB]
Get: 100 http://deb.debian.org/debian bullseye/main armhf libaec0 armhf 1.0.4-1 [20.4 kB]
Get: 101 http://deb.debian.org/debian bullseye/main armhf libaom0 armhf 1.0.0.errata1-3 [821 kB]
Get: 102 http://deb.debian.org/debian bullseye/main armhf libarchive13 armhf 3.4.3-2+b1 [304 kB]
Get: 103 http://deb.debian.org/debian bullseye/main armhf libblas3 armhf 3.9.0-3 [109 kB]
Get: 104 http://deb.debian.org/debian bullseye/main armhf libgfortran5 armhf 10.2.1-6 [237 kB]
Get: 105 http://deb.debian.org/debian bullseye/main armhf liblapack3 armhf 3.9.0-3 [1651 kB]
Get: 106 http://deb.debian.org/debian bullseye/main armhf libarpack2 armhf 3.8.0-1 [87.9 kB]
Get: 107 http://deb.debian.org/debian bullseye/main armhf libsuperlu5 armhf 5.2.2+dfsg1-2 [136 kB]
Get: 108 http://deb.debian.org/debian bullseye/main armhf libarmadillo10 armhf 1:10.1.2+dfsg-6 [95.9 kB]
Get: 109 http://deb.debian.org/debian bullseye/main armhf libatk1.0-data all 2.36.0-2 [149 kB]
Get: 110 http://deb.debian.org/debian bullseye/main armhf libatk1.0-0 armhf 2.36.0-2 [45.2 kB]
Get: 111 http://deb.debian.org/debian bullseye/main armhf libxau6 armhf 1:1.0.9-1 [19.0 kB]
Get: 112 http://deb.debian.org/debian bullseye/main armhf libmd0 armhf 1.0.3-3 [27.4 kB]
Get: 113 http://deb.debian.org/debian bullseye/main armhf libbsd0 armhf 0.11.3-1 [103 kB]
Get: 114 http://deb.debian.org/debian bullseye/main armhf libxdmcp6 armhf 1:1.1.2-3 [24.9 kB]
Get: 115 http://deb.debian.org/debian bullseye/main armhf libxcb1 armhf 1.14-3 [136 kB]
Get: 116 http://deb.debian.org/debian bullseye/main armhf libx11-data all 2:1.7.2-1 [311 kB]
Get: 117 http://deb.debian.org/debian bullseye/main armhf libx11-6 armhf 2:1.7.2-1 [713 kB]
Get: 118 http://deb.debian.org/debian bullseye/main armhf libatspi2.0-0 armhf 2.38.0-4 [63.1 kB]
Get: 119 http://deb.debian.org/debian bullseye/main armhf libatk-bridge2.0-0 armhf 2.38.0-1 [56.9 kB]
Get: 120 http://deb.debian.org/debian bullseye/main armhf libavahi-common-data armhf 0.8-5 [123 kB]
Get: 121 http://deb.debian.org/debian bullseye/main armhf libavahi-common3 armhf 0.8-5 [55.1 kB]
Get: 122 http://deb.debian.org/debian bullseye/main armhf libavahi-client3 armhf 0.8-5 [58.5 kB]
Get: 123 http://deb.debian.org/debian bullseye/main armhf libdrm-common all 2.4.104-1 [14.9 kB]
Get: 124 http://deb.debian.org/debian bullseye/main armhf libdrm2 armhf 2.4.104-1 [37.7 kB]
Get: 125 http://deb.debian.org/debian bullseye/main armhf libva2 armhf 2.10.0-1 [62.6 kB]
Get: 126 http://deb.debian.org/debian bullseye/main armhf libva-drm2 armhf 2.10.0-1 [18.4 kB]
Get: 127 http://deb.debian.org/debian bullseye/main armhf libxext6 armhf 2:1.3.3-1.1 [47.8 kB]
Get: 128 http://deb.debian.org/debian bullseye/main armhf libxfixes3 armhf 1:5.0.3-2 [20.6 kB]
Get: 129 http://deb.debian.org/debian bullseye/main armhf libva-x11-2 armhf 2.10.0-1 [21.8 kB]
Get: 130 http://deb.debian.org/debian bullseye/main armhf libvdpau1 armhf 1.4-3 [27.4 kB]
Get: 131 http://deb.debian.org/debian bullseye/main armhf ocl-icd-libopencl1 armhf 2.2.14-2 [39.7 kB]
Get: 132 http://deb.debian.org/debian bullseye/main armhf libavutil56 armhf 7:4.3.2-0+deb11u2 [300 kB]
Get: 133 http://deb.debian.org/debian bullseye/main armhf libpixman-1-0 armhf 0.40.0-1 [466 kB]
Get: 134 http://deb.debian.org/debian bullseye/main armhf libxcb-render0 armhf 1.14-3 [110 kB]
Get: 135 http://deb.debian.org/debian bullseye/main armhf libxcb-shm0 armhf 1.14-3 [101 kB]
Get: 136 http://deb.debian.org/debian bullseye/main armhf libxrender1 armhf 1:0.9.10-1 [29.9 kB]
Get: 137 http://deb.debian.org/debian bullseye/main armhf libcairo2 armhf 1.16.0-5 [615 kB]
Get: 138 http://deb.debian.org/debian bullseye/main armhf libcodec2-0.9 armhf 0.9.2-4 [7867 kB]
Get: 139 http://deb.debian.org/debian bullseye/main armhf libdav1d4 armhf 0.7.1-3 [228 kB]
Get: 140 http://deb.debian.org/debian bullseye/main armhf libgsm1 armhf 1.0.18-2 [26.5 kB]
Get: 141 http://deb.debian.org/debian bullseye/main armhf libmp3lame0 armhf 3.100-3 [350 kB]
Get: 142 http://deb.debian.org/debian bullseye/main armhf libopenjp2-7 armhf 2.4.0-3 [154 kB]
Get: 143 http://deb.debian.org/debian bullseye/main armhf libopus0 armhf 1.3.1-0.1 [166 kB]
Get: 144 http://deb.debian.org/debian bullseye/main armhf libcairo-gobject2 armhf 1.16.0-5 [125 kB]
Get: 145 http://deb.debian.org/debian bullseye/main armhf libfribidi0 armhf 1.0.8-2 [62.9 kB]
Get: 146 http://deb.debian.org/debian bullseye/main armhf libgraphite2-3 armhf 1.3.14-1 [70.5 kB]
Get: 147 http://deb.debian.org/debian bullseye/main armhf libharfbuzz0b armhf 2.7.4-1 [1427 kB]
Get: 148 http://deb.debian.org/debian bullseye/main armhf libthai-data all 0.1.28-3 [170 kB]
Get: 149 http://deb.debian.org/debian bullseye/main armhf libdatrie1 armhf 0.2.13-1 [39.4 kB]
Get: 150 http://deb.debian.org/debian bullseye/main armhf libthai0 armhf 0.1.28-3 [50.9 kB]
Get: 151 http://deb.debian.org/debian bullseye/main armhf libpango-1.0-0 armhf 1.46.2-3 [173 kB]
Get: 152 http://deb.debian.org/debian bullseye/main armhf libpangoft2-1.0-0 armhf 1.46.2-3 [56.1 kB]
Get: 153 http://deb.debian.org/debian bullseye/main armhf libpangocairo-1.0-0 armhf 1.46.2-3 [46.8 kB]
Get: 154 http://deb.debian.org/debian bullseye/main armhf librsvg2-2 armhf 2.50.3+dfsg-1 [2042 kB]
Get: 155 http://deb.debian.org/debian bullseye/main armhf libshine3 armhf 3.1.1-2 [22.0 kB]
Get: 156 http://deb.debian.org/debian bullseye/main armhf libsnappy1v5 armhf 1.1.8-1 [16.5 kB]
Get: 157 http://deb.debian.org/debian bullseye/main armhf libspeex1 armhf 1.2~rc1.2-1.1 [51.6 kB]
Get: 158 http://deb.debian.org/debian bullseye/main armhf libsoxr0 armhf 0.1.3-4 [67.0 kB]
Get: 159 http://deb.debian.org/debian bullseye/main armhf libswresample3 armhf 7:4.3.2-0+deb11u2 [95.5 kB]
Get: 160 http://deb.debian.org/debian bullseye/main armhf libogg0 armhf 1.3.4-0.1 [24.6 kB]
Get: 161 http://deb.debian.org/debian bullseye/main armhf libtheora0 armhf 1.1.1+dfsg.1-15 [147 kB]
Get: 162 http://deb.debian.org/debian bullseye/main armhf libtwolame0 armhf 0.4.0-2 [47.1 kB]
Get: 163 http://deb.debian.org/debian bullseye/main armhf libvorbis0a armhf 1.3.7-1 [83.0 kB]
Get: 164 http://deb.debian.org/debian bullseye/main armhf libvorbisenc2 armhf 1.3.7-1 [74.4 kB]
Get: 165 http://deb.debian.org/debian bullseye/main armhf libvpx6 armhf 1.9.0-1 [1038 kB]
Get: 166 http://deb.debian.org/debian bullseye/main armhf libwavpack1 armhf 5.4.0-1 [75.7 kB]
Get: 167 http://deb.debian.org/debian bullseye/main armhf libwebpmux3 armhf 0.6.1-2.1 [94.2 kB]
Get: 168 http://deb.debian.org/debian bullseye/main armhf libx264-160 armhf 2:0.160.3011+gitcde9a93-2.1 [426 kB]
Get: 169 http://deb.debian.org/debian bullseye/main armhf libx265-192 armhf 3.4-2 [583 kB]
Get: 170 http://deb.debian.org/debian bullseye/main armhf libxvidcore4 armhf 2:1.3.7-1 [203 kB]
Get: 171 http://deb.debian.org/debian bullseye/main armhf libzvbi-common all 0.2.35-18 [64.6 kB]
Get: 172 http://deb.debian.org/debian bullseye/main armhf libzvbi0 armhf 0.2.35-18 [245 kB]
Get: 173 http://deb.debian.org/debian bullseye/main armhf libavcodec58 armhf 7:4.3.2-0+deb11u2 [4496 kB]
Get: 174 http://deb.debian.org/debian bullseye/main armhf libudfread0 armhf 1.1.1-1 [14.3 kB]
Get: 175 http://deb.debian.org/debian bullseye/main armhf libbluray2 armhf 1:1.2.1-4 [124 kB]
Get: 176 http://deb.debian.org/debian bullseye/main armhf libchromaprint1 armhf 1.5.0-2 [33.4 kB]
Get: 177 http://deb.debian.org/debian bullseye/main armhf libgme0 armhf 0.6.3-2 [108 kB]
Get: 178 http://deb.debian.org/debian bullseye/main armhf libmpg123-0 armhf 1.26.4-1 [120 kB]
Get: 179 http://deb.debian.org/debian bullseye/main armhf libvorbisfile3 armhf 1.3.7-1 [25.4 kB]
Get: 180 http://deb.debian.org/debian bullseye/main armhf libopenmpt0 armhf 0.4.11-1 [544 kB]
Get: 181 http://deb.debian.org/debian bullseye/main armhf librabbitmq4 armhf 0.10.0-1 [36.8 kB]
Get: 182 http://deb.debian.org/debian bullseye/main armhf libsrt1.4-gnutls armhf 1.4.2-1.3 [240 kB]
Get: 183 http://deb.debian.org/debian bullseye/main armhf libssh-gcrypt-4 armhf 0.9.5-1 [192 kB]
Get: 184 http://deb.debian.org/debian bullseye/main armhf libnorm1 armhf 1.5.9+dfsg-2 [185 kB]
Get: 185 http://deb.debian.org/debian bullseye/main armhf libpgm-5.3-0 armhf 5.3.128~dfsg-2 [156 kB]
Get: 186 http://deb.debian.org/debian bullseye/main armhf libsodium23 armhf 1.0.18-1 [147 kB]
Get: 187 http://deb.debian.org/debian bullseye/main armhf libzmq5 armhf 4.3.4-1 [237 kB]
Get: 188 http://deb.debian.org/debian bullseye/main armhf libavformat58 armhf 7:4.3.2-0+deb11u2 [974 kB]
Get: 189 http://deb.debian.org/debian bullseye/main armhf libcap2-bin armhf 1:2.44-1 [31.7 kB]
Get: 190 http://deb.debian.org/debian bullseye/main armhf libsasl2-modules-db armhf 2.1.27+dfsg-2.1 [67.6 kB]
Get: 191 http://deb.debian.org/debian bullseye/main armhf libsasl2-2 armhf 2.1.27+dfsg-2.1 [99.1 kB]
Get: 192 http://deb.debian.org/debian bullseye/main armhf libldap-2.4-2 armhf 2.4.57+dfsg-3 [210 kB]
Get: 193 http://deb.debian.org/debian bullseye/main armhf libnghttp2-14 armhf 1.43.0-1 [65.6 kB]
Get: 194 http://deb.debian.org/debian bullseye/main armhf libpsl5 armhf 0.21.0-1.2 [56.1 kB]
Get: 195 http://deb.debian.org/debian bullseye/main armhf librtmp1 armhf 2.4+20151223.gitfa8646d.1-2+b2 [55.2 kB]
Get: 196 http://deb.debian.org/debian bullseye/main armhf libssh2-1 armhf 1.9.0-2 [143 kB]
Get: 197 http://deb.debian.org/debian bullseye/main armhf libcurl3-gnutls armhf 7.74.0-1.3+b1 [306 kB]
Get: 198 http://deb.debian.org/debian bullseye/main armhf libcfitsio9 armhf 3.490-3 [504 kB]
Get: 199 http://deb.debian.org/debian bullseye/main armhf libcharls2 armhf 2.2.0+dfsg-2 [70.1 kB]
Get: 200 http://deb.debian.org/debian bullseye/main armhf liblcms2-2 armhf 2.12~rc1-2 [123 kB]
Get: 201 http://deb.debian.org/debian bullseye/main armhf libcolord2 armhf 1.4.5-3 [126 kB]
Get: 202 http://deb.debian.org/debian bullseye/main armhf libcpuinfo0 armhf 0.0~git20200612.63b2545-2 [25.1 kB]
Get: 203 http://deb.debian.org/debian bullseye/main armhf libcups2 armhf 2.3.3op2-3+deb11u1 [317 kB]
Get: 204 http://deb.debian.org/debian bullseye/main armhf libcurl4 armhf 7.74.0-1.3+b1 [310 kB]
Get: 205 http://deb.debian.org/debian bullseye/main armhf libdap27 armhf 3.20.7-6 [505 kB]
Get: 206 http://deb.debian.org/debian bullseye/main armhf libdapclient6v5 armhf 3.20.7-6 [200 kB]
Get: 207 http://deb.debian.org/debian bullseye/main armhf libraw1394-11 armhf 2.1.2-2 [37.7 kB]
Get: 208 http://deb.debian.org/debian bullseye/main armhf libusb-1.0-0 armhf 2:1.0.24-3 [54.6 kB]
Get: 209 http://deb.debian.org/debian bullseye/main armhf libdc1394-25 armhf 2.2.6-3 [100 kB]
Get: 210 http://deb.debian.org/debian bullseye/main armhf libde265-0 armhf 1.0.8-1 [189 kB]
Get: 211 http://deb.debian.org/debian bullseye/main armhf libdw1 armhf 0.183-1 [216 kB]
Get: 212 http://deb.debian.org/debian bullseye/main armhf libepoxy0 armhf 1.5.5-1 [170 kB]
Get: 213 http://deb.debian.org/debian bullseye/main armhf libepsilon1 armhf 0.9.2+dfsg-5 [36.5 kB]
Get: 214 http://deb.debian.org/debian bullseye/main armhf libexif12 armhf 0.6.22-3 [364 kB]
Get: 215 http://deb.debian.org/debian bullseye/main armhf libfmt7 armhf 7.1.3+ds1-5 [102 kB]
Get: 216 http://deb.debian.org/debian bullseye/main armhf libfreexl1 armhf 1.0.6-1 [31.4 kB]
Get: 217 http://deb.debian.org/debian bullseye/main armhf libfyba0 armhf 4.1.1-7 [101 kB]
Get: 218 http://deb.debian.org/debian bullseye/main armhf libxpm4 armhf 1:3.5.12-1 [44.0 kB]
Get: 219 http://deb.debian.org/debian bullseye/main armhf libgd3 armhf 2.3.0-2 [119 kB]
Get: 220 http://deb.debian.org/debian bullseye/main armhf libgeos-3.9.0 armhf 3.9.0-1 [841 kB]
Get: 221 http://deb.debian.org/debian bullseye/main armhf libgeos-c1v5 armhf 3.9.0-1 [368 kB]
Get: 222 http://deb.debian.org/debian bullseye/main armhf proj-data all 7.2.1-1 [7940 kB]
Get: 223 http://deb.debian.org/debian bullseye/main armhf libproj19 armhf 7.2.1-1 [970 kB]
Get: 224 http://deb.debian.org/debian bullseye/main armhf libgeotiff5 armhf 1.6.0-1 [62.3 kB]
Get: 225 http://deb.debian.org/debian bullseye/main armhf libgif7 armhf 5.1.9-2 [42.7 kB]
Get: 226 http://deb.debian.org/debian bullseye/main armhf libhdf4-0-alt armhf 4.2.15-3 [244 kB]
Get: 227 http://deb.debian.org/debian bullseye/main armhf libsz2 armhf 1.0.4-1 [6616 B]
Get: 228 http://deb.debian.org/debian bullseye/main armhf libhdf5-103-1 armhf 1.10.6+repack-4 [1178 kB]
Get: 229 http://deb.debian.org/debian bullseye/main armhf libheif1 armhf 1.11.0-1 [166 kB]
Get: 230 http://deb.debian.org/debian bullseye/main armhf libminizip1 armhf 1.1-8+b1 [19.0 kB]
Get: 231 http://deb.debian.org/debian bullseye/main armhf liburiparser1 armhf 0.9.4+dfsg-1 [36.7 kB]
Get: 232 http://deb.debian.org/debian bullseye/main armhf libkmlbase1 armhf 1.3.0-9 [44.4 kB]
Get: 233 http://deb.debian.org/debian bullseye/main armhf libkmldom1 armhf 1.3.0-9 [131 kB]
Get: 234 http://deb.debian.org/debian bullseye/main armhf libkmlengine1 armhf 1.3.0-9 [67.0 kB]
Get: 235 http://deb.debian.org/debian bullseye/main armhf mysql-common all 5.8+1.0.7 [7464 B]
Get: 236 http://deb.debian.org/debian bullseye/main armhf mariadb-common all 1:10.5.11-1 [36.3 kB]
Get: 237 http://deb.debian.org/debian bullseye/main armhf libmariadb3 armhf 1:10.5.11-1 [161 kB]
Get: 238 http://deb.debian.org/debian bullseye/main armhf libhdf5-hl-100 armhf 1.10.6+repack-4 [80.5 kB]
Get: 239 http://deb.debian.org/debian bullseye/main armhf libnetcdf18 armhf 1:4.7.4-1 [355 kB]
Get: 240 http://deb.debian.org/debian bullseye/main armhf libltdl7 armhf 2.4.6-15 [388 kB]
Get: 241 http://deb.debian.org/debian bullseye/main armhf libodbc1 armhf 2.3.6-0.1+b1 [191 kB]
Get: 242 http://deb.debian.org/debian bullseye/main armhf libogdi4.1 armhf 4.1.0+ds-5 [176 kB]
Get: 243 http://deb.debian.org/debian bullseye/main armhf libnspr4 armhf 2:4.29-1 [90.7 kB]
Get: 244 http://deb.debian.org/debian bullseye/main armhf libnss3 armhf 2:3.61-1 [1114 kB]
Get: 245 http://deb.debian.org/debian bullseye/main armhf libpoppler102 armhf 20.09.0-3.1 [1541 kB]
Get: 246 http://deb.debian.org/debian bullseye/main armhf libpq5 armhf 13.3-1 [161 kB]
Get: 247 http://deb.debian.org/debian bullseye/main armhf libqhull8.0 armhf 2020.2-3 [225 kB]
Get: 248 http://deb.debian.org/debian bullseye/main armhf librttopo1 armhf 1.1.0-2 [148 kB]
Get: 249 http://deb.debian.org/debian bullseye/main armhf libspatialite7 armhf 5.0.1-2 [1719 kB]
Get: 250 http://deb.debian.org/debian bullseye/main armhf libxerces-c3.2 armhf 3.2.3+debian-3 [731 kB]
Get: 251 http://deb.debian.org/debian bullseye/main armhf odbcinst armhf 2.3.6-0.1+b1 [47.9 kB]
Get: 252 http://deb.debian.org/debian bullseye/main armhf odbcinst1debian2 armhf 2.3.6-0.1+b1 [71.3 kB]
Get: 253 http://deb.debian.org/debian bullseye/main armhf libgdal28 armhf 3.2.2+dfsg-2 [6239 kB]
Get: 254 http://deb.debian.org/debian bullseye/main armhf libsocket++1 armhf 1.12.13-11 [68.3 kB]
Get: 255 http://deb.debian.org/debian bullseye/main armhf libgdcm3.0 armhf 3.0.8-2 [1630 kB]
Get: 256 http://deb.debian.org/debian bullseye/main armhf libgflags2.2 armhf 2.2.2-2 [66.4 kB]
Get: 257 http://deb.debian.org/debian bullseye/main armhf libgoogle-glog0v5 armhf 0.4.0-4 [50.1 kB]
Get: 258 http://deb.debian.org/debian bullseye/main armhf libgphoto2-port12 armhf 2.5.27-1 [145 kB]
Get: 259 http://deb.debian.org/debian bullseye/main armhf libgphoto2-6 armhf 2.5.27-1 [695 kB]
Get: 260 http://deb.debian.org/debian bullseye/main armhf libunwind8 armhf 1.3.2-2 [48.0 kB]
Get: 261 http://deb.debian.org/debian bullseye/main armhf libgstreamer1.0-0 armhf 1.18.4-2.1 [2152 kB]
Get: 262 http://deb.debian.org/debian bullseye/main armhf liborc-0.4-0 armhf 1:0.4.32-1 [157 kB]
Get: 263 http://deb.debian.org/debian bullseye/main armhf libgstreamer-plugins-base1.0-0 armhf 1.18.4-2 [2076 kB]
Get: 264 http://deb.debian.org/debian bullseye/main armhf libjson-glib-1.0-common all 1.6.2-1 [56.9 kB]
Get: 265 http://deb.debian.org/debian bullseye/main armhf libjson-glib-1.0-0 armhf 1.6.2-1 [58.0 kB]
Get: 266 http://deb.debian.org/debian bullseye/main armhf libsoup2.4-1 armhf 2.72.0-2 [246 kB]
Get: 267 http://deb.debian.org/debian bullseye/main armhf libsoup-gnome2.4-1 armhf 2.72.0-2 [21.7 kB]
Get: 268 http://deb.debian.org/debian bullseye/main armhf librest-0.7-0 armhf 0.8.1-1.1 [28.7 kB]
Get: 269 http://deb.debian.org/debian bullseye/main armhf libwayland-client0 armhf 1.18.0-2~exp1.1 [22.1 kB]
Get: 270 http://deb.debian.org/debian bullseye/main armhf libwayland-cursor0 armhf 1.18.0-2~exp1.1 [13.5 kB]
Get: 271 http://deb.debian.org/debian bullseye/main armhf libwayland-egl1 armhf 1.18.0-2~exp1.1 [8192 B]
Get: 272 http://deb.debian.org/debian bullseye/main armhf libxcomposite1 armhf 1:0.4.5-1 [16.1 kB]
Get: 273 http://deb.debian.org/debian bullseye/main armhf libxcursor1 armhf 1:1.2.0-2 [34.2 kB]
Get: 274 http://deb.debian.org/debian bullseye/main armhf libxdamage1 armhf 1:1.1.5-2 [15.1 kB]
Get: 275 http://deb.debian.org/debian bullseye/main armhf libxi6 armhf 2:1.7.10-1 [78.5 kB]
Get: 276 http://deb.debian.org/debian bullseye/main armhf libxinerama1 armhf 2:1.1.4-2 [17.3 kB]
Get: 277 http://deb.debian.org/debian bullseye/main armhf xkb-data all 2.29-2 [655 kB]
Get: 278 http://deb.debian.org/debian bullseye/main armhf libxkbcommon0 armhf 1.0.3-2 [89.8 kB]
Get: 279 http://deb.debian.org/debian bullseye/main armhf libxrandr2 armhf 2:1.5.1-1 [34.9 kB]
Get: 280 http://deb.debian.org/debian bullseye/main armhf libgtk-3-common all 3.24.24-4 [3757 kB]
Get: 281 http://deb.debian.org/debian bullseye/main armhf libgtk-3-0 armhf 3.24.24-4 [2338 kB]
Get: 282 http://deb.debian.org/debian bullseye/main armhf libilmbase25 armhf 2.5.4-1 [195 kB]
Get: 283 http://deb.debian.org/debian bullseye/main armhf libjs-jquery-isonscreen all 1.2.0-1.1 [3196 B]
Get: 284 http://deb.debian.org/debian bullseye/main armhf libjs-jquery-metadata all 12-3 [7660 B]
Get: 285 http://deb.debian.org/debian bullseye/main armhf libjs-jquery-tablesorter all 1:2.31.3+dfsg1-1 [185 kB]
Get: 286 http://deb.debian.org/debian bullseye/main armhf libjs-jquery-throttle-debounce all 1.1+dfsg.1-1.1 [6412 B]
Get: 287 http://deb.debian.org/debian bullseye/main armhf liblbfgsb0 armhf 3.0+dfsg.3-9 [24.9 kB]
Get: 288 http://deb.debian.org/debian bullseye/main armhf liblept5 armhf 1.79.0-1.1 [924 kB]
Get: 289 http://deb.debian.org/debian bullseye/main armhf libleveldb1d armhf 1.22-3 [126 kB]
Get: 290 http://deb.debian.org/debian bullseye/main armhf liblmdb0 armhf 0.9.24-1 [38.7 kB]
Get: 291 http://deb.debian.org/debian bullseye/main armhf libprotobuf23 armhf 3.12.4-1 [777 kB]
Get: 292 http://deb.debian.org/debian bullseye/main armhf libonnx1 armhf 1.7.0+dfsg-3 [689 kB]
Get: 293 http://deb.debian.org/debian bullseye/main armhf libtbb2 armhf 2020.3-1 [118 kB]
Get: 294 http://deb.debian.org/debian bullseye/main armhf libopencv-core4.5 armhf 4.5.1+dfsg-5 [754 kB]
Get: 295 http://deb.debian.org/debian bullseye/main armhf libopencv-flann4.5 armhf 4.5.1+dfsg-5 [109 kB]
Get: 296 http://deb.debian.org/debian bullseye/main armhf libopencv-imgproc4.5 armhf 4.5.1+dfsg-5 [737 kB]
Get: 297 http://deb.debian.org/debian bullseye/main armhf libopencv-features2d4.5 armhf 4.5.1+dfsg-5 [203 kB]
Get: 298 http://deb.debian.org/debian bullseye/main armhf libopencv-calib3d4.5 armhf 4.5.1+dfsg-5 [549 kB]
Get: 299 http://deb.debian.org/debian bullseye/main armhf libopencv-dnn4.5 armhf 4.5.1+dfsg-5 [767 kB]
Get: 300 http://deb.debian.org/debian bullseye/main armhf libopenexr25 armhf 2.5.4-2 [608 kB]
Get: 301 http://deb.debian.org/debian bullseye/main armhf libopencv-imgcodecs4.5 armhf 4.5.1+dfsg-5 [114 kB]
Get: 302 http://deb.debian.org/debian bullseye/main armhf libopencv-highgui4.5 armhf 4.5.1+dfsg-5 [47.0 kB]
Get: 303 http://deb.debian.org/debian bullseye/main armhf libopencv-ml4.5 armhf 4.5.1+dfsg-5 [179 kB]
Get: 304 http://deb.debian.org/debian bullseye/main armhf libopencv-objdetect4.5 armhf 4.5.1+dfsg-5 [150 kB]
Get: 305 http://deb.debian.org/debian bullseye/main armhf libopencv-video4.5 armhf 4.5.1+dfsg-5 [150 kB]
Get: 306 http://deb.debian.org/debian bullseye/main armhf libtesseract4 armhf 4.1.1-2.1 [1107 kB]
Get: 307 http://deb.debian.org/debian bullseye/main armhf libopencv-contrib4.5 armhf 4.5.1+dfsg-5 [2911 kB]
Get: 308 http://deb.debian.org/debian bullseye/main armhf libswscale5 armhf 7:4.3.2-0+deb11u2 [171 kB]
Get: 309 http://deb.debian.org/debian bullseye/main armhf libopencv-videoio4.5 armhf 4.5.1+dfsg-5 [166 kB]
Get: 310 http://deb.debian.org/debian bullseye/main armhf libsleef3 armhf 3.5.1-1 [242 kB]
Get: 311 http://deb.debian.org/debian bullseye/main armhf libtorch1.7 armhf 1.7.1-7 [13.0 MB]
Get: 312 http://deb.debian.org/debian bullseye/main armhf libyaml-0-2 armhf 0.2.2-1 [42.0 kB]
Get: 313 http://deb.debian.org/debian bullseye/main armhf python3-all armhf 3.9.2-3 [1056 B]
Get: 314 http://deb.debian.org/debian bullseye/main armhf python3-attr all 20.3.0-1 [52.9 kB]
Get: 315 http://deb.debian.org/debian bullseye/main armhf python3-certifi all 2020.6.20-1 [151 kB]
Get: 316 http://deb.debian.org/debian bullseye/main armhf python3-pkg-resources all 52.0.0-4 [190 kB]
Get: 317 http://deb.debian.org/debian bullseye/main armhf python3-chardet all 4.0.0-1 [99.0 kB]
Get: 318 http://deb.debian.org/debian bullseye/main armhf python3-coverage armhf 5.1+dfsg.1-2+b2 [167 kB]
Get: 319 http://deb.debian.org/debian bullseye/main armhf python3-six all 1.16.0-2 [17.5 kB]
Get: 320 http://deb.debian.org/debian bullseye/main armhf python3-nose2 all 0.9.2-1 [94.1 kB]
Get: 321 http://deb.debian.org/debian bullseye/main armhf python3-cov-core all 1.15.0-3 [7528 B]
Get: 322 http://deb.debian.org/debian bullseye/main armhf python3-dateutil all 2.8.1-6 [79.2 kB]
Get: 323 http://deb.debian.org/debian bullseye/main armhf python3-decorator all 4.4.2-2 [15.8 kB]
Get: 324 http://deb.debian.org/debian bullseye/main armhf python3-flaky all 3.7.0-1 [20.1 kB]
Get: 325 http://deb.debian.org/debian bullseye/main armhf python3-future all 0.18.2-5 [349 kB]
Get: 326 http://deb.debian.org/debian bullseye/main armhf python3-idna all 2.10-1 [37.4 kB]
Get: 327 http://deb.debian.org/debian bullseye/main armhf python3-more-itertools all 4.2.0-3 [42.7 kB]
Get: 328 http://deb.debian.org/debian bullseye/main armhf python3-zipp all 1.0.0-3 [6060 B]
Get: 329 http://deb.debian.org/debian bullseye/main armhf python3-importlib-metadata all 1.6.0-2 [10.3 kB]
Get: 330 http://deb.debian.org/debian bullseye/main armhf python3-iniconfig all 1.1.1-1 [6308 B]
Get: 331 http://deb.debian.org/debian bullseye/main armhf python3-joblib all 0.17.0-4 [213 kB]
Get: 332 http://deb.debian.org/debian bullseye/main armhf python3-numpy armhf 1:1.19.5-1 [2981 kB]
Get: 333 http://deb.debian.org/debian bullseye/main armhf python3-pyparsing all 2.4.7-1 [109 kB]
Get: 334 http://deb.debian.org/debian bullseye/main armhf python3-packaging all 20.9-2 [33.5 kB]
Get: 335 http://deb.debian.org/debian bullseye/main armhf python3-tz all 2021.1-1 [34.8 kB]
Get: 336 http://deb.debian.org/debian bullseye/main armhf python3-pandas-lib armhf 1.1.5+dfsg-2 [3026 kB]
Get: 337 http://deb.debian.org/debian bullseye/main armhf python3-pandas all 1.1.5+dfsg-2 [2096 kB]
Get: 338 http://deb.debian.org/debian bullseye/main armhf python3-pluggy all 0.13.0-6 [22.3 kB]
Get: 339 http://deb.debian.org/debian bullseye/main armhf python3-py all 1.10.0-1 [94.2 kB]
Get: 340 http://deb.debian.org/debian bullseye/main armhf python3-toml all 0.10.1-1 [15.9 kB]
Get: 341 http://deb.debian.org/debian bullseye/main armhf python3-pytest all 6.0.2-2 [211 kB]
Get: 342 http://deb.debian.org/debian bullseye/main armhf python3-pytest-cov all 2.10.1-1 [23.5 kB]
Get: 343 http://deb.debian.org/debian bullseye/main armhf python3-urllib3 all 1.26.5-1~exp1 [114 kB]
Get: 344 http://deb.debian.org/debian bullseye/main armhf python3-requests all 2.25.1+dfsg-2 [69.3 kB]
Get: 345 http://deb.debian.org/debian bullseye/main armhf python3-scipy armhf 1.6.0-2 [11.3 MB]
Get: 346 http://deb.debian.org/debian bullseye/main armhf python3-setuptools all 52.0.0-4 [366 kB]
Get: 347 http://deb.debian.org/debian bullseye/main armhf python3-sklearn-lib armhf 0.23.2-5 [1637 kB]
Get: 348 http://deb.debian.org/debian bullseye/main armhf python3-threadpoolctl all 2.1.0-1 [15.3 kB]
Get: 349 http://deb.debian.org/debian bullseye/main armhf python3-sklearn all 0.23.2-5 [1818 kB]
Get: 350 http://deb.debian.org/debian bullseye/main armhf python3-tabulate all 0.8.7-0.1 [33.8 kB]
Get: 351 http://deb.debian.org/debian bullseye/main armhf python3-typing-extensions all 3.7.4.3-1 [29.8 kB]
Get: 352 http://deb.debian.org/debian bullseye/main armhf python3-yaml armhf 5.3.1-5 [129 kB]
Get: 353 http://deb.debian.org/debian bullseye/main armhf python3-torch armhf 1.7.1-7 [5234 kB]
Get: 354 http://deb.debian.org/debian bullseye/main armhf python3-tqdm all 4.57.0-2 [93.2 kB]
Fetched 185 MB in 17s (11.0 MB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libapparmor1:armhf.
(Reading database ... 19398 files and directories currently installed.)
Preparing to unpack .../00-libapparmor1_2.13.6-10_armhf.deb ...
Unpacking libapparmor1:armhf (2.13.6-10) ...
Selecting previously unselected package libcap2:armhf.
Preparing to unpack .../01-libcap2_1%3a2.44-1_armhf.deb ...
Unpacking libcap2:armhf (1:2.44-1) ...
Selecting previously unselected package libargon2-1:armhf.
Preparing to unpack .../02-libargon2-1_0~20171227-0.2_armhf.deb ...
Unpacking libargon2-1:armhf (0~20171227-0.2) ...
Selecting previously unselected package dmsetup.
Preparing to unpack .../03-dmsetup_2%3a1.02.175-2.1_armhf.deb ...
Unpacking dmsetup (2:1.02.175-2.1) ...
Selecting previously unselected package libdevmapper1.02.1:armhf.
Preparing to unpack .../04-libdevmapper1.02.1_2%3a1.02.175-2.1_armhf.deb ...
Unpacking libdevmapper1.02.1:armhf (2:1.02.175-2.1) ...
Selecting previously unselected package libjson-c5:armhf.
Preparing to unpack .../05-libjson-c5_0.15-2_armhf.deb ...
Unpacking libjson-c5:armhf (0.15-2) ...
Selecting previously unselected package libcryptsetup12:armhf.
Preparing to unpack .../06-libcryptsetup12_2%3a2.3.5-1_armhf.deb ...
Unpacking libcryptsetup12:armhf (2:2.3.5-1) ...
Selecting previously unselected package libip4tc2:armhf.
Preparing to unpack .../07-libip4tc2_1.8.7-1_armhf.deb ...
Unpacking libip4tc2:armhf (1.8.7-1) ...
Selecting previously unselected package libkmod2:armhf.
Preparing to unpack .../08-libkmod2_28-1_armhf.deb ...
Unpacking libkmod2:armhf (28-1) ...
Selecting previously unselected package systemd-timesyncd.
Preparing to unpack .../09-systemd-timesyncd_247.3-6_armhf.deb ...
Unpacking systemd-timesyncd (247.3-6) ...
Selecting previously unselected package systemd.
Preparing to unpack .../10-systemd_247.3-6_armhf.deb ...
Unpacking systemd (247.3-6) ...
Setting up libapparmor1:armhf (2.13.6-10) ...
Setting up libcap2:armhf (1:2.44-1) ...
Setting up libargon2-1:armhf (0~20171227-0.2) ...
Setting up libjson-c5:armhf (0.15-2) ...
Setting up libip4tc2:armhf (1.8.7-1) ...
Setting up libkmod2:armhf (28-1) ...
Setting up libdevmapper1.02.1:armhf (2:1.02.175-2.1) ...
Setting up libcryptsetup12:armhf (2:2.3.5-1) ...
Setting up systemd-timesyncd (247.3-6) ...
Created symlink /etc/systemd/system/dbus-org.freedesktop.timesync1.service -> /lib/systemd/system/systemd-timesyncd.service.
Created symlink /etc/systemd/system/sysinit.target.wants/systemd-timesyncd.service -> /lib/systemd/system/systemd-timesyncd.service.
Setting up systemd (247.3-6) ...
Created symlink /etc/systemd/system/getty.target.wants/getty@tty1.service -> /lib/systemd/system/getty@.service.
Created symlink /etc/systemd/system/multi-user.target.wants/remote-fs.target -> /lib/systemd/system/remote-fs.target.
Created symlink /etc/systemd/system/sysinit.target.wants/systemd-pstore.service -> /lib/systemd/system/systemd-pstore.service.
Initializing machine ID from random generator.
Setting up dmsetup (2:1.02.175-2.1) ...
Selecting previously unselected package systemd-sysv.
(Reading database ... 20268 files and directories currently installed.)
Preparing to unpack .../00-systemd-sysv_247.3-6_armhf.deb ...
Unpacking systemd-sysv (247.3-6) ...
Selecting previously unselected package libdbus-1-3:armhf.
Preparing to unpack .../01-libdbus-1-3_1.12.20-2_armhf.deb ...
Unpacking libdbus-1-3:armhf (1.12.20-2) ...
Selecting previously unselected package libexpat1:armhf.
Preparing to unpack .../02-libexpat1_2.2.10-2_armhf.deb ...
Unpacking libexpat1:armhf (2.2.10-2) ...
Selecting previously unselected package dbus.
Preparing to unpack .../03-dbus_1.12.20-2_armhf.deb ...
Unpacking dbus (1.12.20-2) ...
Selecting previously unselected package bsdextrautils. Preparing to unpack .../04-bsdextrautils_2.36.1-8_armhf.deb ... Unpacking bsdextrautils (2.36.1-8) ... Selecting previously unselected package libuchardet0:armhf. Preparing to unpack .../05-libuchardet0_0.0.7-1_armhf.deb ... Unpacking libuchardet0:armhf (0.0.7-1) ... Selecting previously unselected package groff-base. Preparing to unpack .../06-groff-base_1.22.4-6_armhf.deb ... Unpacking groff-base (1.22.4-6) ... Selecting previously unselected package libpipeline1:armhf. Preparing to unpack .../07-libpipeline1_1.5.3-1_armhf.deb ... Unpacking libpipeline1:armhf (1.5.3-1) ... Selecting previously unselected package man-db. Preparing to unpack .../08-man-db_2.9.4-2_armhf.deb ... Unpacking man-db (2.9.4-2) ... Selecting previously unselected package libjs-jquery. Preparing to unpack .../09-libjs-jquery_3.5.1+dfsg+~3.5.5-7_all.deb ... Unpacking libjs-jquery (3.5.1+dfsg+~3.5.5-7) ... Selecting previously unselected package libjs-jquery-hotkeys. Preparing to unpack .../10-libjs-jquery-hotkeys_0~20130707+git2d51e3a9+dfsg-2.1_all.deb ... Unpacking libjs-jquery-hotkeys (0~20130707+git2d51e3a9+dfsg-2.1) ... Selecting previously unselected package libpython3.9-minimal:armhf. Preparing to unpack .../11-libpython3.9-minimal_3.9.2-1_armhf.deb ... Unpacking libpython3.9-minimal:armhf (3.9.2-1) ... Selecting previously unselected package python3.9-minimal. Preparing to unpack .../12-python3.9-minimal_3.9.2-1_armhf.deb ... Unpacking python3.9-minimal (3.9.2-1) ... Setting up libpython3.9-minimal:armhf (3.9.2-1) ... Setting up libexpat1:armhf (2.2.10-2) ... Setting up python3.9-minimal (3.9.2-1) ... Selecting previously unselected package python3-minimal. (Reading database ... 21229 files and directories currently installed.) Preparing to unpack .../0-python3-minimal_3.9.2-3_armhf.deb ... Unpacking python3-minimal (3.9.2-3) ... Selecting previously unselected package media-types. Preparing to unpack .../1-media-types_4.0.0_all.deb ... Unpacking media-types (4.0.0) ... Selecting previously unselected package libmpdec3:armhf. Preparing to unpack .../2-libmpdec3_2.5.1-1_armhf.deb ... Unpacking libmpdec3:armhf (2.5.1-1) ... Selecting previously unselected package readline-common. Preparing to unpack .../3-readline-common_8.1-1_all.deb ... Unpacking readline-common (8.1-1) ... Selecting previously unselected package libreadline8:armhf. Preparing to unpack .../4-libreadline8_8.1-1_armhf.deb ... Unpacking libreadline8:armhf (8.1-1) ... Selecting previously unselected package libpython3.9-stdlib:armhf. Preparing to unpack .../5-libpython3.9-stdlib_3.9.2-1_armhf.deb ... Unpacking libpython3.9-stdlib:armhf (3.9.2-1) ... Selecting previously unselected package python3.9. Preparing to unpack .../6-python3.9_3.9.2-1_armhf.deb ... Unpacking python3.9 (3.9.2-1) ... Selecting previously unselected package libpython3-stdlib:armhf. Preparing to unpack .../7-libpython3-stdlib_3.9.2-3_armhf.deb ...
Unpacking libpython3-stdlib:armhf (3.9.2-3) ... Setting up python3-minimal (3.9.2-3) ... Selecting previously unselected package python3. (Reading database ... 21650 files and directories currently installed.) Preparing to unpack .../000-python3_3.9.2-3_armhf.deb ... Unpacking python3 (3.9.2-3) ... Selecting previously unselected package libncurses6:armhf. Preparing to unpack .../001-libncurses6_6.2+20201114-2_armhf.deb ... Unpacking libncurses6:armhf (6.2+20201114-2) ... Selecting previously unselected package libprocps8:armhf. Preparing to unpack .../002-libprocps8_2%3a3.3.17-5_armhf.deb ... Unpacking libprocps8:armhf (2:3.3.17-5) ... Selecting previously unselected package procps. Preparing to unpack .../003-procps_2%3a3.3.17-5_armhf.deb ... Unpacking procps (2:3.3.17-5) ... Selecting previously unselected package sensible-utils. Preparing to unpack .../004-sensible-utils_0.0.14_all.deb ... Unpacking sensible-utils (0.0.14) ... Selecting previously unselected package openssl. Preparing to unpack .../005-openssl_1.1.1k-1_armhf.deb ... Unpacking openssl (1.1.1k-1) ... Selecting previously unselected package ca-certificates. Preparing to unpack .../006-ca-certificates_20210119_all.deb ... Unpacking ca-certificates (20210119) ... Selecting previously unselected package libmagic-mgc. Preparing to unpack .../007-libmagic-mgc_1%3a5.39-3_armhf.deb ... Unpacking libmagic-mgc (1:5.39-3) ... Selecting previously unselected package libmagic1:armhf. Preparing to unpack .../008-libmagic1_1%3a5.39-3_armhf.deb ... Unpacking libmagic1:armhf (1:5.39-3) ... Selecting previously unselected package file. Preparing to unpack .../009-file_1%3a5.39-3_armhf.deb ... Unpacking file (1:5.39-3) ... Selecting previously unselected package gettext-base. Preparing to unpack .../010-gettext-base_0.21-4_armhf.deb ... Unpacking gettext-base (0.21-4) ... Selecting previously unselected package libpam-systemd:armhf. Preparing to unpack .../011-libpam-systemd_247.3-6_armhf.deb ... Unpacking libpam-systemd:armhf (247.3-6) ... Selecting previously unselected package ucf. Preparing to unpack .../012-ucf_3.0043_all.deb ... Moving old data out of the way Unpacking ucf (3.0043) ... Selecting previously unselected package hicolor-icon-theme. Preparing to unpack .../013-hicolor-icon-theme_0.17-2_all.deb ... Unpacking hicolor-icon-theme (0.17-2) ... Selecting previously unselected package libgdk-pixbuf2.0-common. Preparing to unpack .../014-libgdk-pixbuf2.0-common_2.42.2+dfsg-1_all.deb ... Unpacking libgdk-pixbuf2.0-common (2.42.2+dfsg-1) ... Selecting previously unselected package libglib2.0-0:armhf. Preparing to unpack .../015-libglib2.0-0_2.66.8-1_armhf.deb ... Unpacking libglib2.0-0:armhf (2.66.8-1) ... Selecting previously unselected package libicu67:armhf. Preparing to unpack .../016-libicu67_67.1-7_armhf.deb ... Unpacking libicu67:armhf (67.1-7) ... Selecting previously unselected package libxml2:armhf. Preparing to unpack .../017-libxml2_2.9.10+dfsg-6.7_armhf.deb ...
Unpacking libxml2:armhf (2.9.10+dfsg-6.7) ... Selecting previously unselected package shared-mime-info. Preparing to unpack .../018-shared-mime-info_2.0-1_armhf.deb ... Unpacking shared-mime-info (2.0-1) ... Selecting previously unselected package libjpeg62-turbo:armhf. Preparing to unpack .../019-libjpeg62-turbo_1%3a2.0.6-4_armhf.deb ... Unpacking libjpeg62-turbo:armhf (1:2.0.6-4) ... Selecting previously unselected package libpng16-16:armhf. Preparing to unpack .../020-libpng16-16_1.6.37-3_armhf.deb ... Unpacking libpng16-16:armhf (1.6.37-3) ... Selecting previously unselected package libdeflate0:armhf. Preparing to unpack .../021-libdeflate0_1.7-1_armhf.deb ... Unpacking libdeflate0:armhf (1.7-1) ... Selecting previously unselected package libjbig0:armhf. Preparing to unpack .../022-libjbig0_2.1-3.1+b2_armhf.deb ... Unpacking libjbig0:armhf (2.1-3.1+b2) ... Selecting previously unselected package libwebp6:armhf. Preparing to unpack .../023-libwebp6_0.6.1-2.1_armhf.deb ... Unpacking libwebp6:armhf (0.6.1-2.1) ... Selecting previously unselected package libtiff5:armhf. Preparing to unpack .../024-libtiff5_4.2.0-1_armhf.deb ... Unpacking libtiff5:armhf (4.2.0-1) ... Selecting previously unselected package libgdk-pixbuf-2.0-0:armhf. Preparing to unpack .../025-libgdk-pixbuf-2.0-0_2.42.2+dfsg-1_armhf.deb ... Unpacking libgdk-pixbuf-2.0-0:armhf (2.42.2+dfsg-1) ... Selecting previously unselected package gtk-update-icon-cache. Preparing to unpack .../026-gtk-update-icon-cache_3.24.24-4_armhf.deb ... Unpacking gtk-update-icon-cache (3.24.24-4) ... Selecting previously unselected package adwaita-icon-theme. Preparing to unpack .../027-adwaita-icon-theme_3.38.0-1_all.deb ... Unpacking adwaita-icon-theme (3.38.0-1) ... Selecting previously unselected package libsigsegv2:armhf. Preparing to unpack .../028-libsigsegv2_2.13-1_armhf.deb ... Unpacking libsigsegv2:armhf (2.13-1) ... Selecting previously unselected package m4. Preparing to unpack .../029-m4_1.4.18-5_armhf.deb ... Unpacking m4 (1.4.18-5) ... Selecting previously unselected package autoconf. Preparing to unpack .../030-autoconf_2.69-14_all.deb ... Unpacking autoconf (2.69-14) ... Selecting previously unselected package autotools-dev. Preparing to unpack .../031-autotools-dev_20180224.1+nmu1_all.deb ... Unpacking autotools-dev (20180224.1+nmu1) ... Selecting previously unselected package automake. Preparing to unpack .../032-automake_1%3a1.16.3-2_all.deb ... Unpacking automake (1:1.16.3-2) ... Selecting previously unselected package autopoint. Preparing to unpack .../033-autopoint_0.21-4_all.deb ... Unpacking autopoint (0.21-4) ... Selecting previously unselected package dbus-user-session. Preparing to unpack .../034-dbus-user-session_1.12.20-2_armhf.deb ... Unpacking dbus-user-session (1.12.20-2) ... Selecting previously unselected package libdconf1:armhf. Preparing to unpack .../035-libdconf1_0.38.0-2_armhf.deb ... Unpacking libdconf1:armhf (0.38.0-2) ... Selecting previously unselected package dconf-service. Preparing to unpack .../036-dconf-service_0.38.0-2_armhf.deb ... Unpacking dconf-service (0.38.0-2) ... Selecting previously unselected package dconf-gsettings-backend:armhf. Preparing to unpack .../037-dconf-gsettings-backend_0.38.0-2_armhf.deb ... Unpacking dconf-gsettings-backend:armhf (0.38.0-2) ... Selecting previously unselected package libdebhelper-perl. Preparing to unpack .../038-libdebhelper-perl_13.3.4_all.deb ... Unpacking libdebhelper-perl (13.3.4) ... Selecting previously unselected package libtool. 
Preparing to unpack .../039-libtool_2.4.6-15_all.deb ... Unpacking libtool (2.4.6-15) ... Selecting previously unselected package dh-autoreconf. Preparing to unpack .../040-dh-autoreconf_20_all.deb ... Unpacking dh-autoreconf (20) ... Selecting previously unselected package libarchive-zip-perl. Preparing to unpack .../041-libarchive-zip-perl_1.68-1_all.deb ... Unpacking libarchive-zip-perl (1.68-1) ... Selecting previously unselected package libsub-override-perl. Preparing to unpack .../042-libsub-override-perl_0.09-2_all.deb ... Unpacking libsub-override-perl (0.09-2) ... Selecting previously unselected package libfile-stripnondeterminism-perl. Preparing to unpack .../043-libfile-stripnondeterminism-perl_1.12.0-1_all.deb ... Unpacking libfile-stripnondeterminism-perl (1.12.0-1) ... Selecting previously unselected package dh-strip-nondeterminism. Preparing to unpack .../044-dh-strip-nondeterminism_1.12.0-1_all.deb ... Unpacking dh-strip-nondeterminism (1.12.0-1) ... Selecting previously unselected package libelf1:armhf. Preparing to unpack .../045-libelf1_0.183-1_armhf.deb ... Unpacking libelf1:armhf (0.183-1) ... Selecting previously unselected package dwz. Preparing to unpack .../046-dwz_0.13+20210201-1_armhf.deb ... Unpacking dwz (0.13+20210201-1) ... Selecting previously unselected package gettext. Preparing to unpack .../047-gettext_0.21-4_armhf.deb ... Unpacking gettext (0.21-4) ... Selecting previously unselected package intltool-debian. Preparing to unpack .../048-intltool-debian_0.35.0+20060710.5_all.deb ... Unpacking intltool-debian (0.35.0+20060710.5) ... Selecting previously unselected package po-debconf. Preparing to unpack .../049-po-debconf_1.0.21+nmu1_all.deb ... Unpacking po-debconf (1.0.21+nmu1) ... Selecting previously unselected package debhelper. Preparing to unpack .../050-debhelper_13.3.4_all.deb ... Unpacking debhelper (13.3.4) ... Selecting previously unselected package python3-lib2to3. Preparing to unpack .../051-python3-lib2to3_3.9.2-1_all.deb ... Unpacking python3-lib2to3 (3.9.2-1) ... Selecting previously unselected package python3-distutils. Preparing to unpack .../052-python3-distutils_3.9.2-1_all.deb ... Unpacking python3-distutils (3.9.2-1) ... Selecting previously unselected package dh-python. Preparing to unpack .../053-dh-python_4.20201102+nmu1_all.deb ... Unpacking dh-python (4.20201102+nmu1) ... Selecting previously unselected package libbrotli1:armhf. Preparing to unpack .../054-libbrotli1_1.0.9-2+b2_armhf.deb ... Unpacking libbrotli1:armhf (1.0.9-2+b2) ... Selecting previously unselected package libfreetype6:armhf. Preparing to unpack .../055-libfreetype6_2.10.4+dfsg-1_armhf.deb ... Unpacking libfreetype6:armhf (2.10.4+dfsg-1) ... Selecting previously unselected package fonts-dejavu-core. Preparing to unpack .../056-fonts-dejavu-core_2.37-2_all.deb ... Unpacking fonts-dejavu-core (2.37-2) ... Selecting previously unselected package fontconfig-config. Preparing to unpack .../057-fontconfig-config_2.13.1-4.2_all.deb ... Unpacking fontconfig-config (2.13.1-4.2) ... Selecting previously unselected package libfontconfig1:armhf. Preparing to unpack .../058-libfontconfig1_2.13.1-4.2_armhf.deb ... Unpacking libfontconfig1:armhf (2.13.1-4.2) ... Selecting previously unselected package fontconfig. Preparing to unpack .../059-fontconfig_2.13.1-4.2_armhf.deb ... Unpacking fontconfig (2.13.1-4.2) ... Selecting previously unselected package gdal-data. Preparing to unpack .../060-gdal-data_3.2.2+dfsg-2_all.deb ... Unpacking gdal-data (3.2.2+dfsg-2) ... 
Selecting previously unselected package libproxy1v5:armhf. Preparing to unpack .../061-libproxy1v5_0.4.17-1_armhf.deb ... Unpacking libproxy1v5:armhf (0.4.17-1) ... Selecting previously unselected package glib-networking-common. Preparing to unpack .../062-glib-networking-common_2.66.0-2_all.deb ... Unpacking glib-networking-common (2.66.0-2) ... Selecting previously unselected package glib-networking-services. Preparing to unpack .../063-glib-networking-services_2.66.0-2_armhf.deb ... Unpacking glib-networking-services (2.66.0-2) ... Selecting previously unselected package gsettings-desktop-schemas. Preparing to unpack .../064-gsettings-desktop-schemas_3.38.0-2_all.deb ... Unpacking gsettings-desktop-schemas (3.38.0-2) ... Selecting previously unselected package glib-networking:armhf. Preparing to unpack .../065-glib-networking_2.66.0-2_armhf.deb ... Unpacking glib-networking:armhf (2.66.0-2) ... Selecting previously unselected package iso-codes. Preparing to unpack .../066-iso-codes_4.6.0-1_all.deb ... Unpacking iso-codes (4.6.0-1) ... Selecting previously unselected package libaec0:armhf. Preparing to unpack .../067-libaec0_1.0.4-1_armhf.deb ... Unpacking libaec0:armhf (1.0.4-1) ... Selecting previously unselected package libaom0:armhf. Preparing to unpack .../068-libaom0_1.0.0.errata1-3_armhf.deb ... Unpacking libaom0:armhf (1.0.0.errata1-3) ... Selecting previously unselected package libarchive13:armhf. Preparing to unpack .../069-libarchive13_3.4.3-2+b1_armhf.deb ... Unpacking libarchive13:armhf (3.4.3-2+b1) ... Selecting previously unselected package libblas3:armhf. Preparing to unpack .../070-libblas3_3.9.0-3_armhf.deb ... Unpacking libblas3:armhf (3.9.0-3) ... Selecting previously unselected package libgfortran5:armhf. Preparing to unpack .../071-libgfortran5_10.2.1-6_armhf.deb ... Unpacking libgfortran5:armhf (10.2.1-6) ... Selecting previously unselected package liblapack3:armhf. Preparing to unpack .../072-liblapack3_3.9.0-3_armhf.deb ... Unpacking liblapack3:armhf (3.9.0-3) ... Selecting previously unselected package libarpack2:armhf. Preparing to unpack .../073-libarpack2_3.8.0-1_armhf.deb ... Unpacking libarpack2:armhf (3.8.0-1) ... Selecting previously unselected package libsuperlu5:armhf. Preparing to unpack .../074-libsuperlu5_5.2.2+dfsg1-2_armhf.deb ... Unpacking libsuperlu5:armhf (5.2.2+dfsg1-2) ... Selecting previously unselected package libarmadillo10. Preparing to unpack .../075-libarmadillo10_1%3a10.1.2+dfsg-6_armhf.deb ... Unpacking libarmadillo10 (1:10.1.2+dfsg-6) ... Selecting previously unselected package libatk1.0-data. Preparing to unpack .../076-libatk1.0-data_2.36.0-2_all.deb ... Unpacking libatk1.0-data (2.36.0-2) ... Selecting previously unselected package libatk1.0-0:armhf. Preparing to unpack .../077-libatk1.0-0_2.36.0-2_armhf.deb ... Unpacking libatk1.0-0:armhf (2.36.0-2) ... Selecting previously unselected package libxau6:armhf. Preparing to unpack .../078-libxau6_1%3a1.0.9-1_armhf.deb ... Unpacking libxau6:armhf (1:1.0.9-1) ... Selecting previously unselected package libmd0:armhf. Preparing to unpack .../079-libmd0_1.0.3-3_armhf.deb ... Unpacking libmd0:armhf (1.0.3-3) ... Selecting previously unselected package libbsd0:armhf. Preparing to unpack .../080-libbsd0_0.11.3-1_armhf.deb ... Unpacking libbsd0:armhf (0.11.3-1) ... Selecting previously unselected package libxdmcp6:armhf. Preparing to unpack .../081-libxdmcp6_1%3a1.1.2-3_armhf.deb ... Unpacking libxdmcp6:armhf (1:1.1.2-3) ... Selecting previously unselected package libxcb1:armhf. 
Preparing to unpack .../082-libxcb1_1.14-3_armhf.deb ... Unpacking libxcb1:armhf (1.14-3) ... Selecting previously unselected package libx11-data. Preparing to unpack .../083-libx11-data_2%3a1.7.2-1_all.deb ... Unpacking libx11-data (2:1.7.2-1) ... Selecting previously unselected package libx11-6:armhf. Preparing to unpack .../084-libx11-6_2%3a1.7.2-1_armhf.deb ... Unpacking libx11-6:armhf (2:1.7.2-1) ... Selecting previously unselected package libatspi2.0-0:armhf. Preparing to unpack .../085-libatspi2.0-0_2.38.0-4_armhf.deb ... Unpacking libatspi2.0-0:armhf (2.38.0-4) ... Selecting previously unselected package libatk-bridge2.0-0:armhf. Preparing to unpack .../086-libatk-bridge2.0-0_2.38.0-1_armhf.deb ... Unpacking libatk-bridge2.0-0:armhf (2.38.0-1) ... Selecting previously unselected package libavahi-common-data:armhf. Preparing to unpack .../087-libavahi-common-data_0.8-5_armhf.deb ... Unpacking libavahi-common-data:armhf (0.8-5) ... Selecting previously unselected package libavahi-common3:armhf. Preparing to unpack .../088-libavahi-common3_0.8-5_armhf.deb ... Unpacking libavahi-common3:armhf (0.8-5) ... Selecting previously unselected package libavahi-client3:armhf. Preparing to unpack .../089-libavahi-client3_0.8-5_armhf.deb ... Unpacking libavahi-client3:armhf (0.8-5) ... Selecting previously unselected package libdrm-common. Preparing to unpack .../090-libdrm-common_2.4.104-1_all.deb ... Unpacking libdrm-common (2.4.104-1) ... Selecting previously unselected package libdrm2:armhf. Preparing to unpack .../091-libdrm2_2.4.104-1_armhf.deb ... Unpacking libdrm2:armhf (2.4.104-1) ... Selecting previously unselected package libva2:armhf. Preparing to unpack .../092-libva2_2.10.0-1_armhf.deb ... Unpacking libva2:armhf (2.10.0-1) ... Selecting previously unselected package libva-drm2:armhf. Preparing to unpack .../093-libva-drm2_2.10.0-1_armhf.deb ... Unpacking libva-drm2:armhf (2.10.0-1) ... Selecting previously unselected package libxext6:armhf. Preparing to unpack .../094-libxext6_2%3a1.3.3-1.1_armhf.deb ... Unpacking libxext6:armhf (2:1.3.3-1.1) ... Selecting previously unselected package libxfixes3:armhf. Preparing to unpack .../095-libxfixes3_1%3a5.0.3-2_armhf.deb ... Unpacking libxfixes3:armhf (1:5.0.3-2) ... Selecting previously unselected package libva-x11-2:armhf. Preparing to unpack .../096-libva-x11-2_2.10.0-1_armhf.deb ... Unpacking libva-x11-2:armhf (2.10.0-1) ... Selecting previously unselected package libvdpau1:armhf. Preparing to unpack .../097-libvdpau1_1.4-3_armhf.deb ... Unpacking libvdpau1:armhf (1.4-3) ... Selecting previously unselected package ocl-icd-libopencl1:armhf. Preparing to unpack .../098-ocl-icd-libopencl1_2.2.14-2_armhf.deb ... Unpacking ocl-icd-libopencl1:armhf (2.2.14-2) ... Selecting previously unselected package libavutil56:armhf. Preparing to unpack .../099-libavutil56_7%3a4.3.2-0+deb11u2_armhf.deb ... Unpacking libavutil56:armhf (7:4.3.2-0+deb11u2) ... Selecting previously unselected package libpixman-1-0:armhf. Preparing to unpack .../100-libpixman-1-0_0.40.0-1_armhf.deb ... Unpacking libpixman-1-0:armhf (0.40.0-1) ... Selecting previously unselected package libxcb-render0:armhf. Preparing to unpack .../101-libxcb-render0_1.14-3_armhf.deb ... Unpacking libxcb-render0:armhf (1.14-3) ... Selecting previously unselected package libxcb-shm0:armhf. Preparing to unpack .../102-libxcb-shm0_1.14-3_armhf.deb ... Unpacking libxcb-shm0:armhf (1.14-3) ... Selecting previously unselected package libxrender1:armhf. 
Preparing to unpack .../103-libxrender1_1%3a0.9.10-1_armhf.deb ... Unpacking libxrender1:armhf (1:0.9.10-1) ... Selecting previously unselected package libcairo2:armhf. Preparing to unpack .../104-libcairo2_1.16.0-5_armhf.deb ... Unpacking libcairo2:armhf (1.16.0-5) ... Selecting previously unselected package libcodec2-0.9:armhf. Preparing to unpack .../105-libcodec2-0.9_0.9.2-4_armhf.deb ... Unpacking libcodec2-0.9:armhf (0.9.2-4) ... Selecting previously unselected package libdav1d4:armhf. Preparing to unpack .../106-libdav1d4_0.7.1-3_armhf.deb ... Unpacking libdav1d4:armhf (0.7.1-3) ... Selecting previously unselected package libgsm1:armhf. Preparing to unpack .../107-libgsm1_1.0.18-2_armhf.deb ... Unpacking libgsm1:armhf (1.0.18-2) ... Selecting previously unselected package libmp3lame0:armhf. Preparing to unpack .../108-libmp3lame0_3.100-3_armhf.deb ... Unpacking libmp3lame0:armhf (3.100-3) ... Selecting previously unselected package libopenjp2-7:armhf. Preparing to unpack .../109-libopenjp2-7_2.4.0-3_armhf.deb ... Unpacking libopenjp2-7:armhf (2.4.0-3) ... Selecting previously unselected package libopus0:armhf. Preparing to unpack .../110-libopus0_1.3.1-0.1_armhf.deb ... Unpacking libopus0:armhf (1.3.1-0.1) ... Selecting previously unselected package libcairo-gobject2:armhf. Preparing to unpack .../111-libcairo-gobject2_1.16.0-5_armhf.deb ... Unpacking libcairo-gobject2:armhf (1.16.0-5) ... Selecting previously unselected package libfribidi0:armhf. Preparing to unpack .../112-libfribidi0_1.0.8-2_armhf.deb ... Unpacking libfribidi0:armhf (1.0.8-2) ... Selecting previously unselected package libgraphite2-3:armhf. Preparing to unpack .../113-libgraphite2-3_1.3.14-1_armhf.deb ... Unpacking libgraphite2-3:armhf (1.3.14-1) ... Selecting previously unselected package libharfbuzz0b:armhf. Preparing to unpack .../114-libharfbuzz0b_2.7.4-1_armhf.deb ... Unpacking libharfbuzz0b:armhf (2.7.4-1) ... Selecting previously unselected package libthai-data. Preparing to unpack .../115-libthai-data_0.1.28-3_all.deb ... Unpacking libthai-data (0.1.28-3) ... Selecting previously unselected package libdatrie1:armhf. Preparing to unpack .../116-libdatrie1_0.2.13-1_armhf.deb ... Unpacking libdatrie1:armhf (0.2.13-1) ... Selecting previously unselected package libthai0:armhf. Preparing to unpack .../117-libthai0_0.1.28-3_armhf.deb ... Unpacking libthai0:armhf (0.1.28-3) ... Selecting previously unselected package libpango-1.0-0:armhf. Preparing to unpack .../118-libpango-1.0-0_1.46.2-3_armhf.deb ... Unpacking libpango-1.0-0:armhf (1.46.2-3) ... Selecting previously unselected package libpangoft2-1.0-0:armhf. Preparing to unpack .../119-libpangoft2-1.0-0_1.46.2-3_armhf.deb ... Unpacking libpangoft2-1.0-0:armhf (1.46.2-3) ... Selecting previously unselected package libpangocairo-1.0-0:armhf. Preparing to unpack .../120-libpangocairo-1.0-0_1.46.2-3_armhf.deb ... Unpacking libpangocairo-1.0-0:armhf (1.46.2-3) ... Selecting previously unselected package librsvg2-2:armhf. Preparing to unpack .../121-librsvg2-2_2.50.3+dfsg-1_armhf.deb ... Unpacking librsvg2-2:armhf (2.50.3+dfsg-1) ... Selecting previously unselected package libshine3:armhf. Preparing to unpack .../122-libshine3_3.1.1-2_armhf.deb ... Unpacking libshine3:armhf (3.1.1-2) ... Selecting previously unselected package libsnappy1v5:armhf. Preparing to unpack .../123-libsnappy1v5_1.1.8-1_armhf.deb ... Unpacking libsnappy1v5:armhf (1.1.8-1) ... Selecting previously unselected package libspeex1:armhf. 
Preparing to unpack .../124-libspeex1_1.2~rc1.2-1.1_armhf.deb ... Unpacking libspeex1:armhf (1.2~rc1.2-1.1) ... Selecting previously unselected package libsoxr0:armhf. Preparing to unpack .../125-libsoxr0_0.1.3-4_armhf.deb ... Unpacking libsoxr0:armhf (0.1.3-4) ... Selecting previously unselected package libswresample3:armhf. Preparing to unpack .../126-libswresample3_7%3a4.3.2-0+deb11u2_armhf.deb ... Unpacking libswresample3:armhf (7:4.3.2-0+deb11u2) ... Selecting previously unselected package libogg0:armhf. Preparing to unpack .../127-libogg0_1.3.4-0.1_armhf.deb ... Unpacking libogg0:armhf (1.3.4-0.1) ... Selecting previously unselected package libtheora0:armhf. Preparing to unpack .../128-libtheora0_1.1.1+dfsg.1-15_armhf.deb ... Unpacking libtheora0:armhf (1.1.1+dfsg.1-15) ... Selecting previously unselected package libtwolame0:armhf. Preparing to unpack .../129-libtwolame0_0.4.0-2_armhf.deb ... Unpacking libtwolame0:armhf (0.4.0-2) ... Selecting previously unselected package libvorbis0a:armhf. Preparing to unpack .../130-libvorbis0a_1.3.7-1_armhf.deb ... Unpacking libvorbis0a:armhf (1.3.7-1) ... Selecting previously unselected package libvorbisenc2:armhf. Preparing to unpack .../131-libvorbisenc2_1.3.7-1_armhf.deb ... Unpacking libvorbisenc2:armhf (1.3.7-1) ... Selecting previously unselected package libvpx6:armhf. Preparing to unpack .../132-libvpx6_1.9.0-1_armhf.deb ... Unpacking libvpx6:armhf (1.9.0-1) ... Selecting previously unselected package libwavpack1:armhf. Preparing to unpack .../133-libwavpack1_5.4.0-1_armhf.deb ... Unpacking libwavpack1:armhf (5.4.0-1) ... Selecting previously unselected package libwebpmux3:armhf. Preparing to unpack .../134-libwebpmux3_0.6.1-2.1_armhf.deb ... Unpacking libwebpmux3:armhf (0.6.1-2.1) ... Selecting previously unselected package libx264-160:armhf. Preparing to unpack .../135-libx264-160_2%3a0.160.3011+gitcde9a93-2.1_armhf.deb ... Unpacking libx264-160:armhf (2:0.160.3011+gitcde9a93-2.1) ... Selecting previously unselected package libx265-192:armhf. Preparing to unpack .../136-libx265-192_3.4-2_armhf.deb ... Unpacking libx265-192:armhf (3.4-2) ... Selecting previously unselected package libxvidcore4:armhf. Preparing to unpack .../137-libxvidcore4_2%3a1.3.7-1_armhf.deb ... Unpacking libxvidcore4:armhf (2:1.3.7-1) ... Selecting previously unselected package libzvbi-common. Preparing to unpack .../138-libzvbi-common_0.2.35-18_all.deb ... Unpacking libzvbi-common (0.2.35-18) ... Selecting previously unselected package libzvbi0:armhf. Preparing to unpack .../139-libzvbi0_0.2.35-18_armhf.deb ... Unpacking libzvbi0:armhf (0.2.35-18) ... Selecting previously unselected package libavcodec58:armhf. Preparing to unpack .../140-libavcodec58_7%3a4.3.2-0+deb11u2_armhf.deb ... Unpacking libavcodec58:armhf (7:4.3.2-0+deb11u2) ... Selecting previously unselected package libudfread0:armhf. Preparing to unpack .../141-libudfread0_1.1.1-1_armhf.deb ... Unpacking libudfread0:armhf (1.1.1-1) ... Selecting previously unselected package libbluray2:armhf. Preparing to unpack .../142-libbluray2_1%3a1.2.1-4_armhf.deb ... Unpacking libbluray2:armhf (1:1.2.1-4) ... Selecting previously unselected package libchromaprint1:armhf. Preparing to unpack .../143-libchromaprint1_1.5.0-2_armhf.deb ... Unpacking libchromaprint1:armhf (1.5.0-2) ... Selecting previously unselected package libgme0:armhf. Preparing to unpack .../144-libgme0_0.6.3-2_armhf.deb ... Unpacking libgme0:armhf (0.6.3-2) ... Selecting previously unselected package libmpg123-0:armhf. 
Preparing to unpack .../145-libmpg123-0_1.26.4-1_armhf.deb ... Unpacking libmpg123-0:armhf (1.26.4-1) ... Selecting previously unselected package libvorbisfile3:armhf. Preparing to unpack .../146-libvorbisfile3_1.3.7-1_armhf.deb ... Unpacking libvorbisfile3:armhf (1.3.7-1) ... Selecting previously unselected package libopenmpt0:armhf. Preparing to unpack .../147-libopenmpt0_0.4.11-1_armhf.deb ... Unpacking libopenmpt0:armhf (0.4.11-1) ... Selecting previously unselected package librabbitmq4:armhf. Preparing to unpack .../148-librabbitmq4_0.10.0-1_armhf.deb ... Unpacking librabbitmq4:armhf (0.10.0-1) ... Selecting previously unselected package libsrt1.4-gnutls:armhf. Preparing to unpack .../149-libsrt1.4-gnutls_1.4.2-1.3_armhf.deb ... Unpacking libsrt1.4-gnutls:armhf (1.4.2-1.3) ... Selecting previously unselected package libssh-gcrypt-4:armhf. Preparing to unpack .../150-libssh-gcrypt-4_0.9.5-1_armhf.deb ... Unpacking libssh-gcrypt-4:armhf (0.9.5-1) ... Selecting previously unselected package libnorm1:armhf. Preparing to unpack .../151-libnorm1_1.5.9+dfsg-2_armhf.deb ... Unpacking libnorm1:armhf (1.5.9+dfsg-2) ... Selecting previously unselected package libpgm-5.3-0:armhf. Preparing to unpack .../152-libpgm-5.3-0_5.3.128~dfsg-2_armhf.deb ... Unpacking libpgm-5.3-0:armhf (5.3.128~dfsg-2) ... Selecting previously unselected package libsodium23:armhf. Preparing to unpack .../153-libsodium23_1.0.18-1_armhf.deb ... Unpacking libsodium23:armhf (1.0.18-1) ... Selecting previously unselected package libzmq5:armhf. Preparing to unpack .../154-libzmq5_4.3.4-1_armhf.deb ... Unpacking libzmq5:armhf (4.3.4-1) ... Selecting previously unselected package libavformat58:armhf. Preparing to unpack .../155-libavformat58_7%3a4.3.2-0+deb11u2_armhf.deb ... Unpacking libavformat58:armhf (7:4.3.2-0+deb11u2) ... Selecting previously unselected package libcap2-bin. Preparing to unpack .../156-libcap2-bin_1%3a2.44-1_armhf.deb ... Unpacking libcap2-bin (1:2.44-1) ... Selecting previously unselected package libsasl2-modules-db:armhf. Preparing to unpack .../157-libsasl2-modules-db_2.1.27+dfsg-2.1_armhf.deb ... Unpacking libsasl2-modules-db:armhf (2.1.27+dfsg-2.1) ... Selecting previously unselected package libsasl2-2:armhf. Preparing to unpack .../158-libsasl2-2_2.1.27+dfsg-2.1_armhf.deb ... Unpacking libsasl2-2:armhf (2.1.27+dfsg-2.1) ... Selecting previously unselected package libldap-2.4-2:armhf. Preparing to unpack .../159-libldap-2.4-2_2.4.57+dfsg-3_armhf.deb ... Unpacking libldap-2.4-2:armhf (2.4.57+dfsg-3) ... Selecting previously unselected package libnghttp2-14:armhf. Preparing to unpack .../160-libnghttp2-14_1.43.0-1_armhf.deb ... Unpacking libnghttp2-14:armhf (1.43.0-1) ... Selecting previously unselected package libpsl5:armhf. Preparing to unpack .../161-libpsl5_0.21.0-1.2_armhf.deb ... Unpacking libpsl5:armhf (0.21.0-1.2) ... Selecting previously unselected package librtmp1:armhf. Preparing to unpack .../162-librtmp1_2.4+20151223.gitfa8646d.1-2+b2_armhf.deb ... Unpacking librtmp1:armhf (2.4+20151223.gitfa8646d.1-2+b2) ... Selecting previously unselected package libssh2-1:armhf. Preparing to unpack .../163-libssh2-1_1.9.0-2_armhf.deb ... Unpacking libssh2-1:armhf (1.9.0-2) ... Selecting previously unselected package libcurl3-gnutls:armhf. Preparing to unpack .../164-libcurl3-gnutls_7.74.0-1.3+b1_armhf.deb ... Unpacking libcurl3-gnutls:armhf (7.74.0-1.3+b1) ... Selecting previously unselected package libcfitsio9:armhf. Preparing to unpack .../165-libcfitsio9_3.490-3_armhf.deb ... 
Unpacking libcfitsio9:armhf (3.490-3) ... Selecting previously unselected package libcharls2:armhf. Preparing to unpack .../166-libcharls2_2.2.0+dfsg-2_armhf.deb ... Unpacking libcharls2:armhf (2.2.0+dfsg-2) ... Selecting previously unselected package liblcms2-2:armhf. Preparing to unpack .../167-liblcms2-2_2.12~rc1-2_armhf.deb ... Unpacking liblcms2-2:armhf (2.12~rc1-2) ... Selecting previously unselected package libcolord2:armhf. Preparing to unpack .../168-libcolord2_1.4.5-3_armhf.deb ... Unpacking libcolord2:armhf (1.4.5-3) ... Selecting previously unselected package libcpuinfo0:armhf. Preparing to unpack .../169-libcpuinfo0_0.0~git20200612.63b2545-2_armhf.deb ... Unpacking libcpuinfo0:armhf (0.0~git20200612.63b2545-2) ... Selecting previously unselected package libcups2:armhf. Preparing to unpack .../170-libcups2_2.3.3op2-3+deb11u1_armhf.deb ... Unpacking libcups2:armhf (2.3.3op2-3+deb11u1) ... Selecting previously unselected package libcurl4:armhf. Preparing to unpack .../171-libcurl4_7.74.0-1.3+b1_armhf.deb ... Unpacking libcurl4:armhf (7.74.0-1.3+b1) ... Selecting previously unselected package libdap27:armhf. Preparing to unpack .../172-libdap27_3.20.7-6_armhf.deb ... Unpacking libdap27:armhf (3.20.7-6) ... Selecting previously unselected package libdapclient6v5:armhf. Preparing to unpack .../173-libdapclient6v5_3.20.7-6_armhf.deb ... Unpacking libdapclient6v5:armhf (3.20.7-6) ... Selecting previously unselected package libraw1394-11:armhf. Preparing to unpack .../174-libraw1394-11_2.1.2-2_armhf.deb ... Unpacking libraw1394-11:armhf (2.1.2-2) ... Selecting previously unselected package libusb-1.0-0:armhf. Preparing to unpack .../175-libusb-1.0-0_2%3a1.0.24-3_armhf.deb ... Unpacking libusb-1.0-0:armhf (2:1.0.24-3) ... Selecting previously unselected package libdc1394-25:armhf. Preparing to unpack .../176-libdc1394-25_2.2.6-3_armhf.deb ... Unpacking libdc1394-25:armhf (2.2.6-3) ... Selecting previously unselected package libde265-0:armhf. Preparing to unpack .../177-libde265-0_1.0.8-1_armhf.deb ... Unpacking libde265-0:armhf (1.0.8-1) ... Selecting previously unselected package libdw1:armhf. Preparing to unpack .../178-libdw1_0.183-1_armhf.deb ... Unpacking libdw1:armhf (0.183-1) ... Selecting previously unselected package libepoxy0:armhf. Preparing to unpack .../179-libepoxy0_1.5.5-1_armhf.deb ... Unpacking libepoxy0:armhf (1.5.5-1) ... Selecting previously unselected package libepsilon1:armhf. Preparing to unpack .../180-libepsilon1_0.9.2+dfsg-5_armhf.deb ... Unpacking libepsilon1:armhf (0.9.2+dfsg-5) ... Selecting previously unselected package libexif12:armhf. Preparing to unpack .../181-libexif12_0.6.22-3_armhf.deb ... Unpacking libexif12:armhf (0.6.22-3) ... Selecting previously unselected package libfmt7:armhf. Preparing to unpack .../182-libfmt7_7.1.3+ds1-5_armhf.deb ... Unpacking libfmt7:armhf (7.1.3+ds1-5) ... Selecting previously unselected package libfreexl1:armhf. Preparing to unpack .../183-libfreexl1_1.0.6-1_armhf.deb ... Unpacking libfreexl1:armhf (1.0.6-1) ... Selecting previously unselected package libfyba0:armhf. Preparing to unpack .../184-libfyba0_4.1.1-7_armhf.deb ... Unpacking libfyba0:armhf (4.1.1-7) ... Selecting previously unselected package libxpm4:armhf. Preparing to unpack .../185-libxpm4_1%3a3.5.12-1_armhf.deb ... Unpacking libxpm4:armhf (1:3.5.12-1) ... Selecting previously unselected package libgd3:armhf. Preparing to unpack .../186-libgd3_2.3.0-2_armhf.deb ... Unpacking libgd3:armhf (2.3.0-2) ... 
Selecting previously unselected package libgeos-3.9.0:armhf. Preparing to unpack .../187-libgeos-3.9.0_3.9.0-1_armhf.deb ... Unpacking libgeos-3.9.0:armhf (3.9.0-1) ... Selecting previously unselected package libgeos-c1v5:armhf. Preparing to unpack .../188-libgeos-c1v5_3.9.0-1_armhf.deb ... Unpacking libgeos-c1v5:armhf (3.9.0-1) ... Selecting previously unselected package proj-data. Preparing to unpack .../189-proj-data_7.2.1-1_all.deb ... Unpacking proj-data (7.2.1-1) ... Selecting previously unselected package libproj19:armhf. Preparing to unpack .../190-libproj19_7.2.1-1_armhf.deb ... Unpacking libproj19:armhf (7.2.1-1) ... Selecting previously unselected package libgeotiff5:armhf. Preparing to unpack .../191-libgeotiff5_1.6.0-1_armhf.deb ... Unpacking libgeotiff5:armhf (1.6.0-1) ... Selecting previously unselected package libgif7:armhf. Preparing to unpack .../192-libgif7_5.1.9-2_armhf.deb ... Unpacking libgif7:armhf (5.1.9-2) ... Selecting previously unselected package libhdf4-0-alt. Preparing to unpack .../193-libhdf4-0-alt_4.2.15-3_armhf.deb ... Unpacking libhdf4-0-alt (4.2.15-3) ... Selecting previously unselected package libsz2:armhf. Preparing to unpack .../194-libsz2_1.0.4-1_armhf.deb ... Unpacking libsz2:armhf (1.0.4-1) ... Selecting previously unselected package libhdf5-103-1:armhf. Preparing to unpack .../195-libhdf5-103-1_1.10.6+repack-4_armhf.deb ... Unpacking libhdf5-103-1:armhf (1.10.6+repack-4) ... Selecting previously unselected package libheif1:armhf. Preparing to unpack .../196-libheif1_1.11.0-1_armhf.deb ... Unpacking libheif1:armhf (1.11.0-1) ... Selecting previously unselected package libminizip1:armhf. Preparing to unpack .../197-libminizip1_1.1-8+b1_armhf.deb ... Unpacking libminizip1:armhf (1.1-8+b1) ... Selecting previously unselected package liburiparser1:armhf. Preparing to unpack .../198-liburiparser1_0.9.4+dfsg-1_armhf.deb ... Unpacking liburiparser1:armhf (0.9.4+dfsg-1) ... Selecting previously unselected package libkmlbase1:armhf. Preparing to unpack .../199-libkmlbase1_1.3.0-9_armhf.deb ... Unpacking libkmlbase1:armhf (1.3.0-9) ... Selecting previously unselected package libkmldom1:armhf. Preparing to unpack .../200-libkmldom1_1.3.0-9_armhf.deb ... Unpacking libkmldom1:armhf (1.3.0-9) ... Selecting previously unselected package libkmlengine1:armhf. Preparing to unpack .../201-libkmlengine1_1.3.0-9_armhf.deb ... Unpacking libkmlengine1:armhf (1.3.0-9) ... Selecting previously unselected package mysql-common. Preparing to unpack .../202-mysql-common_5.8+1.0.7_all.deb ... Unpacking mysql-common (5.8+1.0.7) ... Selecting previously unselected package mariadb-common. Preparing to unpack .../203-mariadb-common_1%3a10.5.11-1_all.deb ... Unpacking mariadb-common (1:10.5.11-1) ... Selecting previously unselected package libmariadb3:armhf. Preparing to unpack .../204-libmariadb3_1%3a10.5.11-1_armhf.deb ... Unpacking libmariadb3:armhf (1:10.5.11-1) ... Selecting previously unselected package libhdf5-hl-100:armhf. Preparing to unpack .../205-libhdf5-hl-100_1.10.6+repack-4_armhf.deb ... Unpacking libhdf5-hl-100:armhf (1.10.6+repack-4) ... Selecting previously unselected package libnetcdf18:armhf. Preparing to unpack .../206-libnetcdf18_1%3a4.7.4-1_armhf.deb ... Unpacking libnetcdf18:armhf (1:4.7.4-1) ... Selecting previously unselected package libltdl7:armhf. Preparing to unpack .../207-libltdl7_2.4.6-15_armhf.deb ... Unpacking libltdl7:armhf (2.4.6-15) ... Selecting previously unselected package libodbc1:armhf. 
Preparing to unpack .../208-libodbc1_2.3.6-0.1+b1_armhf.deb ... Unpacking libodbc1:armhf (2.3.6-0.1+b1) ... Selecting previously unselected package libogdi4.1. Preparing to unpack .../209-libogdi4.1_4.1.0+ds-5_armhf.deb ... Unpacking libogdi4.1 (4.1.0+ds-5) ... Selecting previously unselected package libnspr4:armhf. Preparing to unpack .../210-libnspr4_2%3a4.29-1_armhf.deb ... Unpacking libnspr4:armhf (2:4.29-1) ... Selecting previously unselected package libnss3:armhf. Preparing to unpack .../211-libnss3_2%3a3.61-1_armhf.deb ... Unpacking libnss3:armhf (2:3.61-1) ... Selecting previously unselected package libpoppler102:armhf. Preparing to unpack .../212-libpoppler102_20.09.0-3.1_armhf.deb ... Unpacking libpoppler102:armhf (20.09.0-3.1) ... Selecting previously unselected package libpq5:armhf. Preparing to unpack .../213-libpq5_13.3-1_armhf.deb ... Unpacking libpq5:armhf (13.3-1) ... Selecting previously unselected package libqhull8.0:armhf. Preparing to unpack .../214-libqhull8.0_2020.2-3_armhf.deb ... Unpacking libqhull8.0:armhf (2020.2-3) ... Selecting previously unselected package librttopo1:armhf. Preparing to unpack .../215-librttopo1_1.1.0-2_armhf.deb ... Unpacking librttopo1:armhf (1.1.0-2) ... Selecting previously unselected package libspatialite7:armhf. Preparing to unpack .../216-libspatialite7_5.0.1-2_armhf.deb ... Unpacking libspatialite7:armhf (5.0.1-2) ... Selecting previously unselected package libxerces-c3.2:armhf. Preparing to unpack .../217-libxerces-c3.2_3.2.3+debian-3_armhf.deb ... Unpacking libxerces-c3.2:armhf (3.2.3+debian-3) ... Selecting previously unselected package odbcinst. Preparing to unpack .../218-odbcinst_2.3.6-0.1+b1_armhf.deb ... Unpacking odbcinst (2.3.6-0.1+b1) ... Selecting previously unselected package odbcinst1debian2:armhf. Preparing to unpack .../219-odbcinst1debian2_2.3.6-0.1+b1_armhf.deb ... Unpacking odbcinst1debian2:armhf (2.3.6-0.1+b1) ... Selecting previously unselected package libgdal28. Preparing to unpack .../220-libgdal28_3.2.2+dfsg-2_armhf.deb ... Unpacking libgdal28 (3.2.2+dfsg-2) ... Selecting previously unselected package libsocket++1:armhf. Preparing to unpack .../221-libsocket++1_1.12.13-11_armhf.deb ... Unpacking libsocket++1:armhf (1.12.13-11) ... Selecting previously unselected package libgdcm3.0:armhf. Preparing to unpack .../222-libgdcm3.0_3.0.8-2_armhf.deb ... Unpacking libgdcm3.0:armhf (3.0.8-2) ... Selecting previously unselected package libgflags2.2. Preparing to unpack .../223-libgflags2.2_2.2.2-2_armhf.deb ... Unpacking libgflags2.2 (2.2.2-2) ... Selecting previously unselected package libgoogle-glog0v5. Preparing to unpack .../224-libgoogle-glog0v5_0.4.0-4_armhf.deb ... Unpacking libgoogle-glog0v5 (0.4.0-4) ... Selecting previously unselected package libgphoto2-port12:armhf. Preparing to unpack .../225-libgphoto2-port12_2.5.27-1_armhf.deb ... Unpacking libgphoto2-port12:armhf (2.5.27-1) ... Selecting previously unselected package libgphoto2-6:armhf. Preparing to unpack .../226-libgphoto2-6_2.5.27-1_armhf.deb ... Unpacking libgphoto2-6:armhf (2.5.27-1) ... Selecting previously unselected package libunwind8:armhf. Preparing to unpack .../227-libunwind8_1.3.2-2_armhf.deb ... Unpacking libunwind8:armhf (1.3.2-2) ... Selecting previously unselected package libgstreamer1.0-0:armhf. Preparing to unpack .../228-libgstreamer1.0-0_1.18.4-2.1_armhf.deb ... Unpacking libgstreamer1.0-0:armhf (1.18.4-2.1) ... Selecting previously unselected package liborc-0.4-0:armhf. 
Preparing to unpack .../229-liborc-0.4-0_1%3a0.4.32-1_armhf.deb ... Unpacking liborc-0.4-0:armhf (1:0.4.32-1) ... Selecting previously unselected package libgstreamer-plugins-base1.0-0:armhf. Preparing to unpack .../230-libgstreamer-plugins-base1.0-0_1.18.4-2_armhf.deb ... Unpacking libgstreamer-plugins-base1.0-0:armhf (1.18.4-2) ... Selecting previously unselected package libjson-glib-1.0-common. Preparing to unpack .../231-libjson-glib-1.0-common_1.6.2-1_all.deb ... Unpacking libjson-glib-1.0-common (1.6.2-1) ... Selecting previously unselected package libjson-glib-1.0-0:armhf. Preparing to unpack .../232-libjson-glib-1.0-0_1.6.2-1_armhf.deb ... Unpacking libjson-glib-1.0-0:armhf (1.6.2-1) ... Selecting previously unselected package libsoup2.4-1:armhf. Preparing to unpack .../233-libsoup2.4-1_2.72.0-2_armhf.deb ... Unpacking libsoup2.4-1:armhf (2.72.0-2) ... Selecting previously unselected package libsoup-gnome2.4-1:armhf. Preparing to unpack .../234-libsoup-gnome2.4-1_2.72.0-2_armhf.deb ... Unpacking libsoup-gnome2.4-1:armhf (2.72.0-2) ... Selecting previously unselected package librest-0.7-0:armhf. Preparing to unpack .../235-librest-0.7-0_0.8.1-1.1_armhf.deb ... Unpacking librest-0.7-0:armhf (0.8.1-1.1) ... Selecting previously unselected package libwayland-client0:armhf. Preparing to unpack .../236-libwayland-client0_1.18.0-2~exp1.1_armhf.deb ... Unpacking libwayland-client0:armhf (1.18.0-2~exp1.1) ... Selecting previously unselected package libwayland-cursor0:armhf. Preparing to unpack .../237-libwayland-cursor0_1.18.0-2~exp1.1_armhf.deb ... Unpacking libwayland-cursor0:armhf (1.18.0-2~exp1.1) ... Selecting previously unselected package libwayland-egl1:armhf. Preparing to unpack .../238-libwayland-egl1_1.18.0-2~exp1.1_armhf.deb ... Unpacking libwayland-egl1:armhf (1.18.0-2~exp1.1) ... Selecting previously unselected package libxcomposite1:armhf. Preparing to unpack .../239-libxcomposite1_1%3a0.4.5-1_armhf.deb ... Unpacking libxcomposite1:armhf (1:0.4.5-1) ... Selecting previously unselected package libxcursor1:armhf. Preparing to unpack .../240-libxcursor1_1%3a1.2.0-2_armhf.deb ... Unpacking libxcursor1:armhf (1:1.2.0-2) ... Selecting previously unselected package libxdamage1:armhf. Preparing to unpack .../241-libxdamage1_1%3a1.1.5-2_armhf.deb ... Unpacking libxdamage1:armhf (1:1.1.5-2) ... Selecting previously unselected package libxi6:armhf. Preparing to unpack .../242-libxi6_2%3a1.7.10-1_armhf.deb ... Unpacking libxi6:armhf (2:1.7.10-1) ... Selecting previously unselected package libxinerama1:armhf. Preparing to unpack .../243-libxinerama1_2%3a1.1.4-2_armhf.deb ... Unpacking libxinerama1:armhf (2:1.1.4-2) ... Selecting previously unselected package xkb-data. Preparing to unpack .../244-xkb-data_2.29-2_all.deb ... Unpacking xkb-data (2.29-2) ... Selecting previously unselected package libxkbcommon0:armhf. Preparing to unpack .../245-libxkbcommon0_1.0.3-2_armhf.deb ... Unpacking libxkbcommon0:armhf (1.0.3-2) ... Selecting previously unselected package libxrandr2:armhf. Preparing to unpack .../246-libxrandr2_2%3a1.5.1-1_armhf.deb ... Unpacking libxrandr2:armhf (2:1.5.1-1) ... Selecting previously unselected package libgtk-3-common. Preparing to unpack .../247-libgtk-3-common_3.24.24-4_all.deb ... Unpacking libgtk-3-common (3.24.24-4) ... Selecting previously unselected package libgtk-3-0:armhf. Preparing to unpack .../248-libgtk-3-0_3.24.24-4_armhf.deb ... Unpacking libgtk-3-0:armhf (3.24.24-4) ... Selecting previously unselected package libilmbase25:armhf. 
Preparing to unpack .../249-libilmbase25_2.5.4-1_armhf.deb ... Unpacking libilmbase25:armhf (2.5.4-1) ... Selecting previously unselected package libjs-jquery-isonscreen. Preparing to unpack .../250-libjs-jquery-isonscreen_1.2.0-1.1_all.deb ... Unpacking libjs-jquery-isonscreen (1.2.0-1.1) ... Selecting previously unselected package libjs-jquery-metadata. Preparing to unpack .../251-libjs-jquery-metadata_12-3_all.deb ... Unpacking libjs-jquery-metadata (12-3) ... Selecting previously unselected package libjs-jquery-tablesorter. Preparing to unpack .../252-libjs-jquery-tablesorter_1%3a2.31.3+dfsg1-1_all.deb ... Unpacking libjs-jquery-tablesorter (1:2.31.3+dfsg1-1) ... Selecting previously unselected package libjs-jquery-throttle-debounce. Preparing to unpack .../253-libjs-jquery-throttle-debounce_1.1+dfsg.1-1.1_all.deb ... Unpacking libjs-jquery-throttle-debounce (1.1+dfsg.1-1.1) ... Selecting previously unselected package liblbfgsb0:armhf. Preparing to unpack .../254-liblbfgsb0_3.0+dfsg.3-9_armhf.deb ... Unpacking liblbfgsb0:armhf (3.0+dfsg.3-9) ... Selecting previously unselected package liblept5:armhf. Preparing to unpack .../255-liblept5_1.79.0-1.1_armhf.deb ... Unpacking liblept5:armhf (1.79.0-1.1) ... Selecting previously unselected package libleveldb1d:armhf. Preparing to unpack .../256-libleveldb1d_1.22-3_armhf.deb ... Unpacking libleveldb1d:armhf (1.22-3) ... Selecting previously unselected package liblmdb0:armhf. Preparing to unpack .../257-liblmdb0_0.9.24-1_armhf.deb ... Unpacking liblmdb0:armhf (0.9.24-1) ... Selecting previously unselected package libprotobuf23:armhf. Preparing to unpack .../258-libprotobuf23_3.12.4-1_armhf.deb ... Unpacking libprotobuf23:armhf (3.12.4-1) ... Selecting previously unselected package libonnx1:armhf. Preparing to unpack .../259-libonnx1_1.7.0+dfsg-3_armhf.deb ... Unpacking libonnx1:armhf (1.7.0+dfsg-3) ... Selecting previously unselected package libtbb2:armhf. Preparing to unpack .../260-libtbb2_2020.3-1_armhf.deb ... Unpacking libtbb2:armhf (2020.3-1) ... Selecting previously unselected package libopencv-core4.5:armhf. Preparing to unpack .../261-libopencv-core4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-core4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-flann4.5:armhf. Preparing to unpack .../262-libopencv-flann4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-flann4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-imgproc4.5:armhf. Preparing to unpack .../263-libopencv-imgproc4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-imgproc4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-features2d4.5:armhf. Preparing to unpack .../264-libopencv-features2d4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-features2d4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-calib3d4.5:armhf. Preparing to unpack .../265-libopencv-calib3d4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-calib3d4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-dnn4.5:armhf. Preparing to unpack .../266-libopencv-dnn4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-dnn4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopenexr25:armhf. Preparing to unpack .../267-libopenexr25_2.5.4-2_armhf.deb ... Unpacking libopenexr25:armhf (2.5.4-2) ... Selecting previously unselected package libopencv-imgcodecs4.5:armhf. Preparing to unpack .../268-libopencv-imgcodecs4.5_4.5.1+dfsg-5_armhf.deb ... 
Unpacking libopencv-imgcodecs4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-highgui4.5:armhf. Preparing to unpack .../269-libopencv-highgui4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-highgui4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-ml4.5:armhf. Preparing to unpack .../270-libopencv-ml4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-ml4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-objdetect4.5:armhf. Preparing to unpack .../271-libopencv-objdetect4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-objdetect4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libopencv-video4.5:armhf. Preparing to unpack .../272-libopencv-video4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-video4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libtesseract4:armhf. Preparing to unpack .../273-libtesseract4_4.1.1-2.1_armhf.deb ... Unpacking libtesseract4:armhf (4.1.1-2.1) ... Selecting previously unselected package libopencv-contrib4.5:armhf. Preparing to unpack .../274-libopencv-contrib4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-contrib4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libswscale5:armhf. Preparing to unpack .../275-libswscale5_7%3a4.3.2-0+deb11u2_armhf.deb ... Unpacking libswscale5:armhf (7:4.3.2-0+deb11u2) ... Selecting previously unselected package libopencv-videoio4.5:armhf. Preparing to unpack .../276-libopencv-videoio4.5_4.5.1+dfsg-5_armhf.deb ... Unpacking libopencv-videoio4.5:armhf (4.5.1+dfsg-5) ... Selecting previously unselected package libsleef3:armhf. Preparing to unpack .../277-libsleef3_3.5.1-1_armhf.deb ... Unpacking libsleef3:armhf (3.5.1-1) ... Selecting previously unselected package libtorch1.7. Preparing to unpack .../278-libtorch1.7_1.7.1-7_armhf.deb ... Unpacking libtorch1.7 (1.7.1-7) ... Selecting previously unselected package libyaml-0-2:armhf. Preparing to unpack .../279-libyaml-0-2_0.2.2-1_armhf.deb ... Unpacking libyaml-0-2:armhf (0.2.2-1) ... Selecting previously unselected package python3-all. Preparing to unpack .../280-python3-all_3.9.2-3_armhf.deb ... Unpacking python3-all (3.9.2-3) ... Selecting previously unselected package python3-attr. Preparing to unpack .../281-python3-attr_20.3.0-1_all.deb ... Unpacking python3-attr (20.3.0-1) ... Selecting previously unselected package python3-certifi. Preparing to unpack .../282-python3-certifi_2020.6.20-1_all.deb ... Unpacking python3-certifi (2020.6.20-1) ... Selecting previously unselected package python3-pkg-resources. Preparing to unpack .../283-python3-pkg-resources_52.0.0-4_all.deb ... Unpacking python3-pkg-resources (52.0.0-4) ... Selecting previously unselected package python3-chardet. Preparing to unpack .../284-python3-chardet_4.0.0-1_all.deb ... Unpacking python3-chardet (4.0.0-1) ... Selecting previously unselected package python3-coverage. Preparing to unpack .../285-python3-coverage_5.1+dfsg.1-2+b2_armhf.deb ... Unpacking python3-coverage (5.1+dfsg.1-2+b2) ... Selecting previously unselected package python3-six. Preparing to unpack .../286-python3-six_1.16.0-2_all.deb ... Unpacking python3-six (1.16.0-2) ... Selecting previously unselected package python3-nose2. Preparing to unpack .../287-python3-nose2_0.9.2-1_all.deb ... Unpacking python3-nose2 (0.9.2-1) ... Selecting previously unselected package python3-cov-core. Preparing to unpack .../288-python3-cov-core_1.15.0-3_all.deb ... Unpacking python3-cov-core (1.15.0-3) ... 
Selecting previously unselected package python3-dateutil. Preparing to unpack .../289-python3-dateutil_2.8.1-6_all.deb ... Unpacking python3-dateutil (2.8.1-6) ... Selecting previously unselected package python3-decorator. Preparing to unpack .../290-python3-decorator_4.4.2-2_all.deb ... Unpacking python3-decorator (4.4.2-2) ... Selecting previously unselected package python3-flaky. Preparing to unpack .../291-python3-flaky_3.7.0-1_all.deb ... Unpacking python3-flaky (3.7.0-1) ... Selecting previously unselected package python3-future. Preparing to unpack .../292-python3-future_0.18.2-5_all.deb ... Unpacking python3-future (0.18.2-5) ... Selecting previously unselected package python3-idna. Preparing to unpack .../293-python3-idna_2.10-1_all.deb ... Unpacking python3-idna (2.10-1) ... Selecting previously unselected package python3-more-itertools. Preparing to unpack .../294-python3-more-itertools_4.2.0-3_all.deb ... Unpacking python3-more-itertools (4.2.0-3) ... Selecting previously unselected package python3-zipp. Preparing to unpack .../295-python3-zipp_1.0.0-3_all.deb ... Unpacking python3-zipp (1.0.0-3) ... Selecting previously unselected package python3-importlib-metadata. Preparing to unpack .../296-python3-importlib-metadata_1.6.0-2_all.deb ... Unpacking python3-importlib-metadata (1.6.0-2) ... Selecting previously unselected package python3-iniconfig. Preparing to unpack .../297-python3-iniconfig_1.1.1-1_all.deb ... Unpacking python3-iniconfig (1.1.1-1) ... Selecting previously unselected package python3-joblib. Preparing to unpack .../298-python3-joblib_0.17.0-4_all.deb ... Unpacking python3-joblib (0.17.0-4) ... Selecting previously unselected package python3-numpy. Preparing to unpack .../299-python3-numpy_1%3a1.19.5-1_armhf.deb ... Unpacking python3-numpy (1:1.19.5-1) ... Selecting previously unselected package python3-pyparsing. Preparing to unpack .../300-python3-pyparsing_2.4.7-1_all.deb ... Unpacking python3-pyparsing (2.4.7-1) ... Selecting previously unselected package python3-packaging. Preparing to unpack .../301-python3-packaging_20.9-2_all.deb ... Unpacking python3-packaging (20.9-2) ... Selecting previously unselected package python3-tz. Preparing to unpack .../302-python3-tz_2021.1-1_all.deb ... Unpacking python3-tz (2021.1-1) ... Selecting previously unselected package python3-pandas-lib:armhf. Preparing to unpack .../303-python3-pandas-lib_1.1.5+dfsg-2_armhf.deb ... Unpacking python3-pandas-lib:armhf (1.1.5+dfsg-2) ... Selecting previously unselected package python3-pandas. Preparing to unpack .../304-python3-pandas_1.1.5+dfsg-2_all.deb ... Unpacking python3-pandas (1.1.5+dfsg-2) ... Selecting previously unselected package python3-pluggy. Preparing to unpack .../305-python3-pluggy_0.13.0-6_all.deb ... Unpacking python3-pluggy (0.13.0-6) ... Selecting previously unselected package python3-py. Preparing to unpack .../306-python3-py_1.10.0-1_all.deb ... Unpacking python3-py (1.10.0-1) ... Selecting previously unselected package python3-toml. Preparing to unpack .../307-python3-toml_0.10.1-1_all.deb ... Unpacking python3-toml (0.10.1-1) ... Selecting previously unselected package python3-pytest. Preparing to unpack .../308-python3-pytest_6.0.2-2_all.deb ... Unpacking python3-pytest (6.0.2-2) ... Selecting previously unselected package python3-pytest-cov. Preparing to unpack .../309-python3-pytest-cov_2.10.1-1_all.deb ... Unpacking python3-pytest-cov (2.10.1-1) ... Selecting previously unselected package python3-urllib3. 
Preparing to unpack .../310-python3-urllib3_1.26.5-1~exp1_all.deb ... Unpacking python3-urllib3 (1.26.5-1~exp1) ... Selecting previously unselected package python3-requests. Preparing to unpack .../311-python3-requests_2.25.1+dfsg-2_all.deb ... Unpacking python3-requests (2.25.1+dfsg-2) ... Selecting previously unselected package python3-scipy. Preparing to unpack .../312-python3-scipy_1.6.0-2_armhf.deb ... Unpacking python3-scipy (1.6.0-2) ... Selecting previously unselected package python3-setuptools. Preparing to unpack .../313-python3-setuptools_52.0.0-4_all.deb ... Unpacking python3-setuptools (52.0.0-4) ... Selecting previously unselected package python3-sklearn-lib:armhf. Preparing to unpack .../314-python3-sklearn-lib_0.23.2-5_armhf.deb ... Unpacking python3-sklearn-lib:armhf (0.23.2-5) ... Selecting previously unselected package python3-threadpoolctl. Preparing to unpack .../315-python3-threadpoolctl_2.1.0-1_all.deb ... Unpacking python3-threadpoolctl (2.1.0-1) ... Selecting previously unselected package python3-sklearn. Preparing to unpack .../316-python3-sklearn_0.23.2-5_all.deb ... Unpacking python3-sklearn (0.23.2-5) ... Selecting previously unselected package python3-tabulate. Preparing to unpack .../317-python3-tabulate_0.8.7-0.1_all.deb ... Unpacking python3-tabulate (0.8.7-0.1) ... Selecting previously unselected package python3-typing-extensions. Preparing to unpack .../318-python3-typing-extensions_3.7.4.3-1_all.deb ... Unpacking python3-typing-extensions (3.7.4.3-1) ... Selecting previously unselected package python3-yaml. Preparing to unpack .../319-python3-yaml_5.3.1-5_armhf.deb ... Unpacking python3-yaml (5.3.1-5) ... Selecting previously unselected package python3-torch. Preparing to unpack .../320-python3-torch_1.7.1-7_armhf.deb ... Unpacking python3-torch (1.7.1-7) ... Selecting previously unselected package python3-tqdm. Preparing to unpack .../321-python3-tqdm_4.57.0-2_all.deb ... Unpacking python3-tqdm (4.57.0-2) ... Setting up libgme0:armhf (0.6.3-2) ... Setting up libssh-gcrypt-4:armhf (0.9.5-1) ... Setting up media-types (4.0.0) ... Setting up libpipeline1:armhf (1.5.3-1) ... Setting up liblmdb0:armhf (0.9.24-1) ... Setting up libgraphite2-3:armhf (1.3.14-1) ... Setting up libsrt1.4-gnutls:armhf (1.4.2-1.3) ... Setting up liblcms2-2:armhf (2.12~rc1-2) ... Setting up libpixman-1-0:armhf (0.40.0-1) ... Setting up libudfread0:armhf (1.1.1-1) ... Setting up mysql-common (5.8+1.0.7) ... update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode Setting up librabbitmq4:armhf (0.10.0-1) ... Setting up systemd-sysv (247.3-6) ... Setting up libxau6:armhf (1:1.0.9-1) ... Setting up libraw1394-11:armhf (2.1.2-2) ... Setting up libproxy1v5:armhf (0.4.17-1) ... Setting up libpsl5:armhf (0.21.0-1.2) ... Setting up libsodium23:armhf (1.0.18-1) ... Setting up libmpg123-0:armhf (1.26.4-1) ... Setting up libogg0:armhf (1.3.4-0.1) ... Setting up libspeex1:armhf (1.2~rc1.2-1.1) ... Setting up proj-data (7.2.1-1) ... Setting up libshine3:armhf (3.1.1-2) ... Setting up bsdextrautils (2.36.1-8) ... update-alternatives: using /usr/bin/write.ul to provide /usr/bin/write (write) in auto mode Setting up hicolor-icon-theme (0.17-2) ... Setting up libtwolame0:armhf (0.4.0-2) ... Setting up libicu67:armhf (67.1-7) ... Setting up libdatrie1:armhf (0.2.13-1) ... Setting up libmagic-mgc (1:5.39-3) ... Setting up libogdi4.1 (4.1.0+ds-5) ... Setting up libqhull8.0:armhf (2020.2-3) ... Setting up libgsm1:armhf (1.0.18-2) ... 
Setting up libcharls2:armhf (2.2.0+dfsg-2) ... Setting up libminizip1:armhf (1.1-8+b1) ... Setting up libsoxr0:armhf (0.1.3-4) ... Setting up libarchive-zip-perl (1.68-1) ... Setting up libyaml-0-2:armhf (0.2.2-1) ... Setting up libglib2.0-0:armhf (2.66.8-1) ... Setting up libpgm-5.3-0:armhf (5.3.128~dfsg-2) ... Setting up libaom0:armhf (1.0.0.errata1-3) ... Setting up libdebhelper-perl (13.3.4) ... Setting up libbrotli1:armhf (1.0.9-2+b2) ... Setting up libgdk-pixbuf2.0-common (2.42.2+dfsg-1) ... Setting up libnorm1:armhf (1.5.9+dfsg-2) ... Setting up libtbb2:armhf (2020.3-1) ... Setting up libnghttp2-14:armhf (1.43.0-1) ... Setting up libmagic1:armhf (1:5.39-3) ... Setting up libx265-192:armhf (3.4-2) ... Setting up libdeflate0:armhf (1.7-1) ... Setting up gettext-base (0.21-4) ... Setting up xkb-data (2.29-2) ... Setting up libilmbase25:armhf (2.5.4-1) ... Setting up libprotobuf23:armhf (3.12.4-1) ... Setting up file (1:5.39-3) ... Setting up libjs-jquery-throttle-debounce (1.1+dfsg.1-1.1) ... Setting up libsleef3:armhf (3.5.1-1) ... Setting up libxvidcore4:armhf (2:1.3.7-1) ... Setting up libepsilon1:armhf (0.9.2+dfsg-5) ... Setting up libunwind8:armhf (1.3.2-2) ... Setting up libx264-160:armhf (2:0.160.3011+gitcde9a93-2.1) ... Setting up libjbig0:armhf (2.1-3.1+b2) ... Setting up libaec0:armhf (1.0.4-1) ... Setting up libcolord2:armhf (1.4.5-3) ... Setting up gdal-data (3.2.2+dfsg-2) ... Setting up libsnappy1v5:armhf (1.1.8-1) ... Setting up libsasl2-modules-db:armhf (2.1.27+dfsg-2.1) ... Setting up libcap2-bin (1:2.44-1) ... Setting up libopenexr25:armhf (2.5.4-2) ... Setting up libdconf1:armhf (0.38.0-2) ... Setting up mariadb-common (1:10.5.11-1) ... update-alternatives: using /etc/mysql/mariadb.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode Setting up autotools-dev (20180224.1+nmu1) ... Setting up libblas3:armhf (3.9.0-3) ... update-alternatives: using /usr/lib/arm-linux-gnueabihf/blas/libblas.so.3 to provide /usr/lib/arm-linux-gnueabihf/libblas.so.3 (libblas.so.3-arm-linux-gnueabihf) in auto mode Setting up libjpeg62-turbo:armhf (1:2.0.6-4) ... Setting up libva2:armhf (2.10.0-1) ... Setting up libx11-data (2:1.7.2-1) ... Setting up libepoxy0:armhf (1.5.5-1) ... Setting up libnspr4:armhf (2:4.29-1) ... Setting up librtmp1:armhf (2.4+20151223.gitfa8646d.1-2+b2) ... Setting up libcodec2-0.9:armhf (0.9.2-4) ... Setting up libavahi-common-data:armhf (0.8-5) ... Setting up libncurses6:armhf (6.2+20201114-2) ... Setting up libdbus-1-3:armhf (1.12.20-2) ... Setting up dbus (1.12.20-2) ... Running in chroot, ignoring request. invoke-rc.d: policy-rc.d denied execution of start. Setting up libsigsegv2:armhf (2.13-1) ... Setting up libfribidi0:armhf (1.0.8-2) ... Setting up libopus0:armhf (1.3.1-0.1) ... Setting up libexif12:armhf (0.6.22-3) ... Setting up libpng16-16:armhf (1.6.37-3) ... Setting up libvorbis0a:armhf (1.3.7-1) ... Setting up liborc-0.4-0:armhf (1:0.4.32-1) ... Setting up autopoint (0.21-4) ... Setting up libwebp6:armhf (0.6.1-2.1) ... Setting up libmariadb3:armhf (1:10.5.11-1) ... Setting up fonts-dejavu-core (2.37-2) ... Setting up libsocket++1:armhf (1.12.13-11) ... Setting up libltdl7:armhf (2.4.6-15) ... Setting up libsasl2-2:armhf (2.1.27+dfsg-2.1) ... Setting up libgfortran5:armhf (10.2.1-6) ... Setting up libcpuinfo0:armhf (0.0~git20200612.63b2545-2) ... Setting up libhdf4-0-alt (4.2.15-3) ... Setting up libonnx1:armhf (1.7.0+dfsg-3) ... Setting up libgif7:armhf (5.1.9-2) ... Setting up libatk1.0-data (2.36.0-2) ... 
Setting up liburiparser1:armhf (0.9.4+dfsg-1) ... Setting up libmd0:armhf (1.0.3-3) ... Setting up libfreexl1:armhf (1.0.6-1) ... Setting up sensible-utils (0.0.14) ... Setting up ocl-icd-libopencl1:armhf (2.2.14-2) ... Setting up libvpx6:armhf (1.9.0-1) ... Setting up libwavpack1:armhf (5.4.0-1) ... Setting up libgeos-3.9.0:armhf (3.9.0-1) ... Setting up libfyba0:armhf (4.1.1-7) ... Setting up libuchardet0:armhf (0.0.7-1) ... Setting up libkmlbase1:armhf (1.3.0-9) ... Setting up libmpdec3:armhf (2.5.1-1) ... Setting up libfmt7:armhf (7.1.3+ds1-5) ... Setting up libpam-systemd:armhf (247.3-6) ... Setting up libdav1d4:armhf (0.7.1-3) ... Setting up libopenjp2-7:armhf (2.4.0-3) ... Setting up libsub-override-perl (0.09-2) ... Setting up libthai-data (0.1.28-3) ... Setting up libssh2-1:armhf (1.9.0-2) ... Setting up libjson-glib-1.0-common (1.6.2-1) ... Setting up libatk1.0-0:armhf (2.36.0-2) ... Setting up libtiff5:armhf (4.2.0-1) ... Setting up libwayland-egl1:armhf (1.18.0-2~exp1.1) ... Setting up libusb-1.0-0:armhf (2:1.0.24-3) ... Setting up libgphoto2-port12:armhf (2.5.27-1) ... Setting up libjs-jquery (3.5.1+dfsg+~3.5.5-7) ... Setting up glib-networking-common (2.66.0-2) ... Setting up libjs-jquery-hotkeys (0~20130707+git2d51e3a9+dfsg-2.1) ... Setting up libde265-0:armhf (1.0.8-1) ... Setting up openssl (1.1.1k-1) ... Setting up libwebpmux3:armhf (0.6.1-2.1) ... Setting up libbsd0:armhf (0.11.3-1) ... Setting up libdrm-common (2.4.104-1) ... Setting up libelf1:armhf (0.183-1) ... Setting up readline-common (8.1-1) ... Setting up libxml2:armhf (2.9.10+dfsg-6.7) ... Setting up iso-codes (4.6.0-1) ... Setting up libzvbi-common (0.2.35-18) ... Setting up libprocps8:armhf (2:3.3.17-5) ... Setting up libmp3lame0:armhf (3.100-3) ... Setting up libsz2:armhf (1.0.4-1) ... Setting up libvorbisenc2:armhf (1.3.7-1) ... Setting up libgflags2.2 (2.2.2-2) ... Setting up libxkbcommon0:armhf (1.0.3-2) ... Setting up libkmldom1:armhf (1.3.0-9) ... Setting up libwayland-client0:armhf (1.18.0-2~exp1.1) ... Setting up libfile-stripnondeterminism-perl (1.12.0-1) ... Setting up glib-networking-services (2.66.0-2) ... Setting up libzvbi0:armhf (0.2.35-18) ... Setting up libleveldb1d:armhf (1.22-3) ... Setting up libdw1:armhf (0.183-1) ... Setting up libxdmcp6:armhf (1:1.1.2-3) ... Setting up liblapack3:armhf (3.9.0-3) ... update-alternatives: using /usr/lib/arm-linux-gnueabihf/lapack/liblapack.so.3 to provide /usr/lib/arm-linux-gnueabihf/liblapack.so.3 (liblapack.so.3-arm-linux-gnueabihf) in auto mode Setting up libxcb1:armhf (1.14-3) ... Setting up gettext (0.21-4) ... Setting up libzmq5:armhf (4.3.4-1) ... Setting up libkmlengine1:armhf (1.3.0-9) ... Setting up libopencv-core4.5:armhf (4.5.1+dfsg-5) ... Setting up libtool (2.4.6-15) ... Setting up libarchive13:armhf (3.4.3-2+b1) ... Setting up libxcb-render0:armhf (1.14-3) ... Setting up libreadline8:armhf (8.1-1) ... Setting up libheif1:armhf (1.11.0-1) ... Setting up libarpack2:armhf (3.8.0-1) ... Setting up libavahi-common3:armhf (0.8-5) ... Setting up libopencv-imgproc4.5:armhf (4.5.1+dfsg-5) ... Setting up libsuperlu5:armhf (5.2.2+dfsg1-2) ... Setting up libldap-2.4-2:armhf (2.4.57+dfsg-3) ... Setting up m4 (1.4.18-5) ... Setting up libcurl3-gnutls:armhf (7.74.0-1.3+b1) ... Setting up libnss3:armhf (2:3.61-1) ... Setting up libxcb-shm0:armhf (1.14-3) ... Setting up liblept5:armhf (1.79.0-1.1) ... Setting up libjson-glib-1.0-0:armhf (1.6.2-1) ... Setting up libdap27:armhf (3.20.7-6) ... Setting up intltool-debian (0.35.0+20060710.5) ... 
Setting up libcfitsio9:armhf (3.490-3) ... Setting up libthai0:armhf (0.1.28-3) ... Setting up ca-certificates (20210119) ... Updating certificates in /etc/ssl/certs... 129 added, 0 removed; done. Setting up libvorbisfile3:armhf (1.3.7-1) ... Setting up dbus-user-session (1.12.20-2) ... Setting up libfreetype6:armhf (2.10.4+dfsg-1) ... Setting up libopencv-flann4.5:armhf (4.5.1+dfsg-5) ... Setting up libjs-jquery-metadata (12-3) ... Setting up libgdcm3.0:armhf (3.0.8-2) ... Setting up shared-mime-info (2.0-1) ... Setting up libdc1394-25:armhf (2.2.6-3) ... Setting up libjs-jquery-isonscreen (1.2.0-1.1) ... Setting up libgeos-c1v5:armhf (3.9.0-1) ... Setting up libopencv-dnn4.5:armhf (4.5.1+dfsg-5) ... Setting up libodbc1:armhf (2.3.6-0.1+b1) ... Setting up ucf (3.0043) ... Setting up libtesseract4:armhf (4.1.1-2.1) ... Setting up autoconf (2.69-14) ... Setting up dh-strip-nondeterminism (1.12.0-1) ... Setting up libdrm2:armhf (2.4.104-1) ... Setting up libhdf5-103-1:armhf (1.10.6+repack-4) ... Setting up dwz (0.13+20210201-1) ... Setting up librttopo1:armhf (1.1.0-2) ... Setting up libjs-jquery-tablesorter (1:2.31.3+dfsg1-1) ... Setting up libva-drm2:armhf (2.10.0-1) ... Setting up groff-base (1.22.4-6) ... Setting up libopencv-ml4.5:armhf (4.5.1+dfsg-5) ... Setting up libwayland-cursor0:armhf (1.18.0-2~exp1.1) ... Setting up procps (2:3.3.17-5) ... Setting up libdapclient6v5:armhf (3.20.7-6) ... Setting up libcurl4:armhf (7.74.0-1.3+b1) ... Setting up libx11-6:armhf (2:1.7.2-1) ... Setting up libharfbuzz0b:armhf (2.7.4-1) ... Setting up libgdk-pixbuf-2.0-0:armhf (2.42.2+dfsg-1) ... Setting up libproj19:armhf (7.2.1-1) ... Setting up libxcomposite1:armhf (1:0.4.5-1) ... Setting up libgoogle-glog0v5 (0.4.0-4) ... Setting up libopenmpt0:armhf (0.4.11-1) ... Setting up libavahi-client3:armhf (0.8-5) ... Setting up libgstreamer1.0-0:armhf (1.18.4-2.1) ... Setcap worked! gst-ptp-helper is not suid! Setting up libpython3.9-stdlib:armhf (3.9.2-1) ... Setting up libpython3-stdlib:armhf (3.9.2-3) ... Setting up liblbfgsb0:armhf (3.0+dfsg.3-9) ... Setting up libhdf5-hl-100:armhf (1.10.6+repack-4) ... Setting up automake (1:1.16.3-2) ... update-alternatives: using /usr/bin/automake-1.16 to provide /usr/bin/automake (automake) in auto mode Setting up libspatialite7:armhf (5.0.1-2) ... Setting up gtk-update-icon-cache (3.24.24-4) ... Setting up libarmadillo10 (1:10.1.2+dfsg-6) ... Setting up libxdamage1:armhf (1:1.1.5-2) ... Setting up libxerces-c3.2:armhf (3.2.3+debian-3) ... Setting up libxpm4:armhf (1:3.5.12-1) ... Setting up libxrender1:armhf (1:0.9.10-1) ... Setting up fontconfig-config (2.13.1-4.2) ... Setting up po-debconf (1.0.21+nmu1) ... Setting up libpq5:armhf (13.3-1) ... Setting up libxext6:armhf (2:1.3.3-1.1) ... Setting up libopencv-features2d4.5:armhf (4.5.1+dfsg-5) ... Setting up libgstreamer-plugins-base1.0-0:armhf (1.18.4-2) ... Setting up dconf-service (0.38.0-2) ... Setting up libatspi2.0-0:armhf (2.38.0-4) ... Setting up man-db (2.9.4-2) ... Not building database; man-db/auto-update is not 'true'. Created symlink /etc/systemd/system/timers.target.wants/man-db.timer -> /lib/systemd/system/man-db.timer. Setting up libgeotiff5:armhf (1.6.0-1) ... Setting up dh-autoreconf (20) ... Setting up libatk-bridge2.0-0:armhf (2.38.0-1) ... Setting up adwaita-icon-theme (3.38.0-1) ... update-alternatives: using /usr/share/icons/Adwaita/cursor.theme to provide /usr/share/icons/default/index.theme (x-cursor-theme) in auto mode Setting up libxfixes3:armhf (1:5.0.3-2) ... 
Setting up libxinerama1:armhf (2:1.1.4-2) ... Setting up libxrandr2:armhf (2:1.5.1-1) ... Setting up libcups2:armhf (2.3.3op2-3+deb11u1) ... Setting up libvdpau1:armhf (1.4-3) ... Setting up libnetcdf18:armhf (1:4.7.4-1) ... Setting up libfontconfig1:armhf (2.13.1-4.2) ... Setting up libbluray2:armhf (1:1.2.1-4) ... Setting up libva-x11-2:armhf (2.10.0-1) ... Setting up python3.9 (3.9.2-1) ... Setting up fontconfig (2.13.1-4.2) ... Regenerating fonts cache... done. Setting up libopencv-calib3d4.5:armhf (4.5.1+dfsg-5) ... Setting up libxi6:armhf (2:1.7.10-1) ... Setting up dconf-gsettings-backend:armhf (0.38.0-2) ... Setting up libxcursor1:armhf (1:1.2.0-2) ... Setting up libpango-1.0-0:armhf (1.46.2-3) ... Setting up debhelper (13.3.4) ... Setting up python3 (3.9.2-3) ... Setting up libcairo2:armhf (1.16.0-5) ... Setting up python3-tz (2021.1-1) ... Setting up libavutil56:armhf (7:4.3.2-0+deb11u2) ... Setting up python3-six (1.16.0-2) ... Setting up libopencv-video4.5:armhf (4.5.1+dfsg-5) ... Setting up python3-decorator (4.4.2-2) ... Setting up python3-flaky (3.7.0-1) ... Setting up python3-pyparsing (2.4.7-1) ... Setting up python3-certifi (2020.6.20-1) ... Setting up libgd3:armhf (2.3.0-2) ... Setting up python3-idna (2.10-1) ... Setting up python3-typing-extensions (3.7.4.3-1) ... Setting up python3-toml (0.10.1-1) ... Setting up python3-urllib3 (1.26.5-1~exp1) ... Setting up libpoppler102:armhf (20.09.0-3.1) ... Setting up libopencv-objdetect4.5:armhf (4.5.1+dfsg-5) ... Setting up libtheora0:armhf (1.1.1+dfsg.1-15) ... Setting up python3-dateutil (2.8.1-6) ... Setting up libswscale5:armhf (7:4.3.2-0+deb11u2) ... Setting up libcairo-gobject2:armhf (1.16.0-5) ... Setting up libpangoft2-1.0-0:armhf (1.46.2-3) ... Setting up python3-lib2to3 (3.9.2-1) ... Setting up libgtk-3-common (3.24.24-4) ... Setting up libpangocairo-1.0-0:armhf (1.46.2-3) ... Setting up python3-pkg-resources (52.0.0-4) ... Setting up python3-distutils (3.9.2-1) ... Setting up dh-python (4.20201102+nmu1) ... Setting up gsettings-desktop-schemas (3.38.0-2) ... Setting up python3-more-itertools (4.2.0-3) ... Setting up python3-iniconfig (1.1.1-1) ... Setting up python3-attr (20.3.0-1) ... Setting up python3-setuptools (52.0.0-4) ... Setting up python3-py (1.10.0-1) ... Setting up python3-joblib (0.17.0-4) ... Setting up python3-tqdm (4.57.0-2) ... Setting up python3-threadpoolctl (2.1.0-1) ... Setting up python3-tabulate (0.8.7-0.1) ... Setting up python3-all (3.9.2-3) ... Setting up python3-coverage (5.1+dfsg.1-2+b2) ... Setting up python3-yaml (5.3.1-5) ... Setting up python3-nose2 (0.9.2-1) ... Setting up libswresample3:armhf (7:4.3.2-0+deb11u2) ... Setting up python3-zipp (1.0.0-3) ... Setting up librsvg2-2:armhf (2.50.3+dfsg-1) ... Setting up libgphoto2-6:armhf (2.5.27-1) ... Setting up python3-packaging (20.9-2) ... Setting up python3-chardet (4.0.0-1) ... Setting up python3-requests (2.25.1+dfsg-2) ... Setting up python3-numpy (1:1.19.5-1) ... Setting up libavcodec58:armhf (7:4.3.2-0+deb11u2) ... Setting up glib-networking:armhf (2.66.0-2) ... Setting up python3-future (0.18.2-5) ... update-alternatives: using /usr/bin/python3-futurize to provide /usr/bin/futurize (futurize) in auto mode update-alternatives: using /usr/bin/python3-pasteurize to provide /usr/bin/pasteurize (pasteurize) in auto mode Setting up libchromaprint1:armhf (1.5.0-2) ... Setting up python3-scipy (1.6.0-2) ... Setting up libsoup2.4-1:armhf (2.72.0-2) ... Setting up python3-importlib-metadata (1.6.0-2) ... 
Setting up python3-cov-core (1.15.0-3) ... Setting up libavformat58:armhf (7:4.3.2-0+deb11u2) ... Setting up python3-pandas-lib:armhf (1.1.5+dfsg-2) ... Setting up python3-sklearn-lib:armhf (0.23.2-5) ... Setting up python3-pandas (1.1.5+dfsg-2) ... Setting up python3-sklearn (0.23.2-5) ... Setting up python3-pluggy (0.13.0-6) ... Setting up libsoup-gnome2.4-1:armhf (2.72.0-2) ... Setting up librest-0.7-0:armhf (0.8.1-1.1) ... Setting up libgtk-3-0:armhf (3.24.24-4) ... Setting up python3-pytest (6.0.2-2) ... Setting up python3-pytest-cov (2.10.1-1) ... Setting up odbcinst (2.3.6-0.1+b1) ... Setting up odbcinst1debian2:armhf (2.3.6-0.1+b1) ... Setting up libgdal28 (3.2.2+dfsg-2) ... Setting up libopencv-imgcodecs4.5:armhf (4.5.1+dfsg-5) ... Setting up libopencv-highgui4.5:armhf (4.5.1+dfsg-5) ... Setting up libopencv-videoio4.5:armhf (4.5.1+dfsg-5) ... Setting up libopencv-contrib4.5:armhf (4.5.1+dfsg-5) ... Setting up libtorch1.7 (1.7.1-7) ... Setting up python3-torch (1.7.1-7) ... Processing triggers for libc-bin (2.31-13) ... Processing triggers for ca-certificates (20210119) ... Updating certificates in /etc/ssl/certs... 0 added, 0 removed; done. Running hooks in /etc/ca-certificates/update.d... done. Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... Building tag database... -> Finished parsing the build-deps Reading package lists... Building dependency tree... Reading state information... fakeroot is already the newest version (1.25.3-1.1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. I: Building the package I: Running cd /build/skorch-0.9.0/ && env PATH="/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" HOME="/nonexistent/first-build" dpkg-buildpackage -us -uc -b && env PATH="/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" HOME="/nonexistent/first-build" dpkg-genchanges -S > ../skorch_0.9.0-3_source.changes dpkg-buildpackage: info: source package skorch dpkg-buildpackage: info: source version 0.9.0-3 dpkg-buildpackage: info: source distribution unstable dpkg-buildpackage: info: source changed by Mo Zhou dpkg-source --before-build . 
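As an illustrative aside (not part of the recorded log): the build step just launched boils down to running dpkg-buildpackage -us -uc -b inside the unpacked source tree with the pinned PATH and HOME shown above. A minimal Python sketch of that invocation follows; cwd, PATH and HOME are copied from the log, while the use of subprocess is an assumption for illustration only and not how pbuilder itself drives the build.

    # Sketch only: mirrors the dpkg-buildpackage call recorded above.
    # cwd, PATH and HOME are taken from the log; driving the build via
    # subprocess is illustrative, not what pbuilder actually does.
    import subprocess

    env = {
        "PATH": "/usr/sbin:/usr/bin:/sbin:/bin:/usr/games",
        "HOME": "/nonexistent/first-build",
    }
    subprocess.run(
        ["dpkg-buildpackage", "-us", "-uc", "-b"],
        cwd="/build/skorch-0.9.0",
        env=env,
        check=True,
    )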
dpkg-buildpackage: info: host architecture armhf fakeroot debian/rules clean dh clean -Spybuild --with python3 dh_auto_clean -O-Spybuild I: pybuild base:232: python3.9 setup.py clean running clean removing '/build/skorch-0.9.0/.pybuild/cpython3_3.9/build' (and everything under it) 'build/bdist.linux-armhf' does not exist -- can't clean it 'build/scripts-3.9' does not exist -- can't clean it dh_clean -O-Spybuild debian/rules build dh build -Spybuild --with python3 dh_update_autotools_config -O-Spybuild dh_autoreconf -O-Spybuild dh_auto_configure -O-Spybuild I: pybuild base:232: python3.9 setup.py config running config dh_auto_build -O-Spybuild I: pybuild base:232: /usr/bin/python3 setup.py build running build running build_py creating /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/regressor.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/utils.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/setter.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/net.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/scoring.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/__init__.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/helper.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/exceptions.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/toy.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/history.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/cli.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/dataset.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch copying skorch/classifier.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch creating /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_classifier.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_cli.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_history.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_scoring.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_toy.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/conftest.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_regressor.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_setter.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/__init__.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_net.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_helper.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_dataset.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests copying skorch/tests/test_utils.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests creating /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying skorch/callbacks/base.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying skorch/callbacks/scoring.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying 
skorch/callbacks/__init__.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying skorch/callbacks/lr_scheduler.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying skorch/callbacks/regularization.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying skorch/callbacks/training.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks copying skorch/callbacks/logging.py -> /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks running egg_info creating skorch.egg-info writing skorch.egg-info/PKG-INFO writing dependency_links to skorch.egg-info/dependency_links.txt writing requirements to skorch.egg-info/requires.txt writing top-level names to skorch.egg-info/top_level.txt writing manifest file 'skorch.egg-info/SOURCES.txt' reading manifest file 'skorch.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'skorch.egg-info/SOURCES.txt' debian/rules override_dh_auto_test make[1]: Entering directory '/build/skorch-0.9.0' dh_auto_test I: pybuild base:232: cd /build/skorch-0.9.0/.pybuild/cpython3_3.9/build; python3.9 -m pytest -v ============================= test session starts ============================== platform linux -- Python 3.9.2, pytest-6.0.2, py-1.10.0, pluggy-0.13.0 -- /usr/bin/python3.9 cachedir: .pytest_cache rootdir: /build/skorch-0.9.0, configfile: setup.cfg plugins: flaky-3.7.0, cov-2.10.1 collecting ... collected 693 items skorch/tests/test_classifier.py::TestNeuralNet::test_clone ERROR [ 0%] skorch/tests/test_classifier.py::TestNeuralNet::test_predict_and_predict_proba ERROR [ 0%] skorch/tests/test_classifier.py::TestNeuralNet::test_score ERROR [ 0%] skorch/tests/test_classifier.py::TestNeuralNet::test_takes_log_with_nllloss FAILED [ 0%] skorch/tests/test_classifier.py::TestNeuralNet::test_takes_no_log_without_nllloss FAILED [ 0%] skorch/tests/test_classifier.py::TestNeuralNet::test_high_learning_rate FAILED [ 0%] skorch/tests/test_classifier.py::TestNeuralNet::test_binary_classes_set_by_default FAILED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_non_binary_classes_set_by_default PASSED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_classes_data_torch_tensor PASSED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_classes_with_gaps PASSED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_pass_classes_explicitly_overrides PASSED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_pass_empty_classes_raises[classes0] PASSED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_pass_empty_classes_raises[classes1] PASSED [ 1%] skorch/tests/test_classifier.py::TestNeuralNet::test_with_calibrated_classifier_cv ERROR [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_fit PASSED [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_clone PASSED [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_not_fitted_raises[predict] PASSED [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_not_fitted_raises[predict_proba] PASSED [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_not_fitted_raises[forward] PASSED [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_not_fitted_raises[forward_iter] PASSED [ 2%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_net_learns PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_batch_size_one 
PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_history_default_keys PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_predict_predict_proba[0] PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_predict_predict_proba[0.25] PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_predict_predict_proba[0.5] PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_predict_predict_proba[0.75] PASSED [ 3%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_predict_predict_proba[1] PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_score PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_fit_with_dataset_and_y_none PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_target_2d_raises PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_custom_loss_does_not_call_sigmoid PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_default_loss_does_call_sigmoid PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_with_calibrated_classifier_cv PASSED [ 4%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_grid_search_with_roc_auc PASSED [ 5%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_module_output_not_1d PASSED [ 5%] skorch/tests/test_classifier.py::TestNeuralNetBinaryClassifier::test_module_output_2d_raises PASSED [ 5%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name[0-0] SKIPPED [ 5%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name[1.23-1.23] SKIPPED [ 5%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name[foo-foo] SKIPPED [ 5%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name[math.cos-cos] SKIPPED [ 5%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name[torch.nn-torch.nn] SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name[torch.nn.ReLU-ReLU] SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_resolve_dotted_name_instantiated SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_parse_net_kwargs SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_yield_estimators_net SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_yield_estimators_pipe SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_replace_default[--] SKIPPED [ 6%] skorch/tests/test_cli.py::TestCli::test_replace_default[-foo-] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[bar-foo-bar] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default=128)--int (default=)] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default=128)-None-int (default=128)] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default=128)-""-int (default="")] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default=128)-128-int (default=128)] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default=128)-256-int (default=256)0] SKIPPED [ 7%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default=128)-256-int (default=256)1] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[with_parens (default=(1, 2))-new_value9-with_parens (default=(3, 4))] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default 
=128)-256-int (default =256)] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default= 128)-256-int (default= 256)] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[int (default = 128)-256-int (default = 256)] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[nonlin (default = ReLU())-new_value13-nonlin (default = Hardtanh(min_val=1, max_val=2))] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[tuple (min, max), default=(0, 1)-new_value14-tuple (min, max), default=(-1, 1)] SKIPPED [ 8%] skorch/tests/test_cli.py::TestCli::test_replace_default[boolean, optional, default True-False-boolean, optional, default False] SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_replace_default['l1', 'l2', or 'max', optional ('l2' by default)-l1-'l1', 'l2', or 'max', optional ('l1' by default)] SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_replace_default["l1", "l2", or "max", optional ("l2" by default)-l1-"l1", "l2", or "max", optional ("l1" by default)] SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_replace_default[l1, l2, or max, optional (l2 by default)-l1-l1, l2, or max, optional (l1 by default)] SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_replace_default[tuple, optional ((1, 1) by default)-new_value19-tuple, optional ((2, 2) by default)] SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_replace_default[nonlin (ReLU() by default)-new_value20-nonlin (Tanh() by default)] SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_print_help_net SKIPPED [ 9%] skorch/tests/test_cli.py::TestCli::test_print_help_net_custom_defaults SKIPPED [ 10%] skorch/tests/test_cli.py::TestCli::test_print_help_pipeline SKIPPED [ 10%] skorch/tests/test_cli.py::TestCli::test_print_help_pipeline_custom_defaults SKIPPED [ 10%] skorch/tests/test_cli.py::TestCli::test_parse_args_help SKIPPED [ 10%] skorch/tests/test_cli.py::TestCli::test_parse_args_run SKIPPED [ 10%] skorch/tests/test_cli.py::TestCli::test_parse_args_net_custom_defaults SKIPPED [ 10%] skorch/tests/test_cli.py::TestCli::test_parse_args_pipe_custom_defaults SKIPPED [ 10%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data0-5] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data1-3] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data2-5] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data3-5] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data4-3] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data5-5] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data6-3] PASSED [ 11%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data7-5] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data8-5] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data9-5] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data10-3] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data11-3] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data12-3] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data13-3] PASSED [ 12%] skorch/tests/test_dataset.py::TestGetLen::test_valid_lengths[data14-3] PASSED [ 13%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data0] PASSED [ 13%] 
skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data1] PASSED [ 13%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data2] PASSED [ 13%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data3] PASSED [ 13%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data4] PASSED [ 13%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data5] PASSED [ 13%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data6] PASSED [ 14%] skorch/tests/test_dataset.py::TestGetLen::test_inconsistent_lengths[data7] PASSED [ 14%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_dataset_uses_y_placeholder PASSED [ 14%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_dataset_uses_non_y_placeholder PASSED [ 14%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_custom_dataset_uses_non_y_placeholder PASSED [ 14%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_subset_uses_placeholder_y PASSED [ 14%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_subset_dataset_uses_non_y_placeholder PASSED [ 15%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_subset_of_subset_uses_placeholder_y PASSED [ 15%] skorch/tests/test_dataset.py::TestUsesPlaceholderY::test_subset_of_subset_uses_non_placeholder_y PASSED [ 15%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_tensor_raises_error[net_1d0] PASSED [ 15%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_tensor_raises_error[net_1d1] PASSED [ 15%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_tensor_raises_error[net_1d2] PASSED [ 15%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_tensor_raises_error[net_1d3] PASSED [ 15%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_tensor_raises_error[net_2d0] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_tensor_raises_error[net_2d1] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_tensor_raises_error[net_2d2] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_tensor_raises_error[net_2d3] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_custom_loader[net_1d_custom_loader0] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_custom_loader[net_1d_custom_loader1] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_custom_loader[net_1d_custom_loader2] PASSED [ 16%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_1d_custom_loader[net_1d_custom_loader3] PASSED [ 17%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_custom_loader[net_2d_custom_loader0] PASSED [ 17%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_custom_loader[net_2d_custom_loader1] PASSED [ 17%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_custom_loader[net_2d_custom_loader2] PASSED [ 17%] skorch/tests/test_dataset.py::TestNetWithoutY::test_net_2d_custom_loader[net_2d_custom_loader3] PASSED [ 17%] skorch/tests/test_dataset.py::TestNetWithDict::test_fit_predict_proba FAILED [ 17%] skorch/tests/test_dataset.py::TestNetWithList::test_fit_predict_proba FAILED [ 17%] skorch/tests/test_dataset.py::TestNetWithPandas::test_fit_predict_proba FAILED [ 18%] skorch/tests/test_dataset.py::TestDataset::test_len_correct PASSED [ 18%] skorch/tests/test_dataset.py::TestDataset::test_user_defined_len PASSED [ 18%] skorch/tests/test_dataset.py::TestDataset::test_inconsistent_lengths_raises PASSED [ 18%] 
skorch/tests/test_dataset.py::TestDataset::test_with_numpy_array PASSED [ 18%] skorch/tests/test_dataset.py::TestDataset::test_with_torch_tensor PASSED [ 18%] skorch/tests/test_dataset.py::TestDataset::test_with_pandas_df PASSED [ 18%] skorch/tests/test_dataset.py::TestDataset::test_with_pandas_series PASSED [ 19%] skorch/tests/test_dataset.py::TestDataset::test_with_dict PASSED [ 19%] skorch/tests/test_dataset.py::TestDataset::test_with_list_of_numpy_arrays PASSED [ 19%] skorch/tests/test_dataset.py::TestDataset::test_dataloader_with_sparse_csr[1] PASSED [ 19%] skorch/tests/test_dataset.py::TestDataset::test_dataloader_with_sparse_csr[3] PASSED [ 19%] skorch/tests/test_dataset.py::TestDataset::test_dataloader_with_sparse_csr[10] PASSED [ 19%] skorch/tests/test_dataset.py::TestDataset::test_dataloader_with_sparse_csr[17] PASSED [ 19%] skorch/tests/test_dataset.py::TestTrainSplitIsUsed::test_steps_called_with_split_data PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_reproducible PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_different_kfolds[2] PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_different_kfolds[4] PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_different_kfolds[5] PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_different_kfolds[10] PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_stratified[5] PASSED [ 20%] skorch/tests/test_dataset.py::TestCVSplit::test_stratified[0.2] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_different_fractions[0.1] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_different_fractions[0.2] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_different_fractions[0.5] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_different_fractions[0.75] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_fraction_no_y[0.1] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_fraction_no_y[0.2] PASSED [ 21%] skorch/tests/test_dataset.py::TestCVSplit::test_fraction_no_y[0.5] PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_fraction_no_y[0.75] PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_fraction_no_classifier PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_bad_values_raise[0] PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_bad_values_raise[-0.001] PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_bad_values_raise[-0.2] PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_bad_values_raise[-3] PASSED [ 22%] skorch/tests/test_dataset.py::TestCVSplit::test_not_stratified[5] PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_not_stratified[0.2] PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_predefined_split PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_with_y_none PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_with_torch_tensors PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_with_torch_tensors_and_stratified PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_with_list_of_arrays PASSED [ 23%] skorch/tests/test_dataset.py::TestCVSplit::test_with_dict PASSED [ 24%] skorch/tests/test_dataset.py::TestCVSplit::test_with_pandas PASSED [ 24%] skorch/tests/test_dataset.py::TestCVSplit::test_y_str_val_stratified PASSED [ 24%] skorch/tests/test_dataset.py::TestCVSplit::test_y_list_of_arr_does_not_raise PASSED [ 24%] 
skorch/tests/test_dataset.py::TestCVSplit::test_y_list_of_arr_stratified PASSED [ 24%] skorch/tests/test_dataset.py::TestCVSplit::test_y_dict_does_not_raise PASSED [ 24%] skorch/tests/test_dataset.py::TestCVSplit::test_y_dict_stratified_raises PASSED [ 24%] skorch/tests/test_dataset.py::TestCVSplit::test_y_none_stratified[X0-5] PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_y_none_stratified[X0-0.2] PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_y_none_stratified[X1-5] PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_y_none_stratified[X1-0.2] PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_shuffle_split_reproducible_with_random_state PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_group_kfold PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_random_state_not_used_warning[args0-kwargs0-False] PASSED [ 25%] skorch/tests/test_dataset.py::TestCVSplit::test_random_state_not_used_warning[args1-kwargs1-True] PASSED [ 26%] skorch/tests/test_dataset.py::TestCVSplit::test_random_state_not_used_warning[args2-kwargs2-True] PASSED [ 26%] skorch/tests/test_dataset.py::TestCVSplit::test_random_state_not_used_warning[args3-kwargs3-False] PASSED [ 26%] skorch/tests/test_dataset.py::TestCVSplit::test_random_state_not_used_warning[args4-kwargs4-False] PASSED [ 26%] skorch/tests/test_dataset.py::TestCVSplit::test_random_state_not_used_warning[args5-kwargs5-True] PASSED [ 26%] skorch/tests/test_helper.py::TestSliceDict::test_init_inconsistent_shapes PASSED [ 26%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_correct_shape[item0] PASSED [ 26%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_correct_shape[item1] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_correct_shape[item2] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_correct_shape[item3] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_correct_shape[item4] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_shape_raises[item0] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_shape_raises[item1] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_shape_raises[item2] PASSED [ 27%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_shape_raises[item3] PASSED [ 28%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_shape_raises[item4] PASSED [ 28%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_key_type[1] PASSED [ 28%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_key_type[1.2] PASSED [ 28%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_key_type[key2] PASSED [ 28%] skorch/tests/test_helper.py::TestSliceDict::test_set_item_incorrect_key_type[key3] PASSED [ 28%] skorch/tests/test_helper.py::TestSliceDict::test_update_incorrect_shape_raises[item0] PASSED [ 29%] skorch/tests/test_helper.py::TestSliceDict::test_update_incorrect_shape_raises[item1] PASSED [ 29%] skorch/tests/test_helper.py::TestSliceDict::test_update_incorrect_shape_raises[item2] PASSED [ 29%] skorch/tests/test_helper.py::TestSliceDict::test_update_incorrect_shape_raises[item3] PASSED [ 29%] skorch/tests/test_helper.py::TestSliceDict::test_update_incorrect_shape_raises[item4] PASSED [ 29%] skorch/tests/test_helper.py::TestSliceDict::test_set_first_item_no_shape_raises[123] PASSED [ 29%] 
skorch/tests/test_helper.py::TestSliceDict::test_set_first_item_no_shape_raises[hi] PASSED [ 29%] skorch/tests/test_helper.py::TestSliceDict::test_set_first_item_no_shape_raises[item2] PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_len_and_shape[kwargs0-0] PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_len_and_shape[kwargs1-12] PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_len_and_shape[kwargs2-12] PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_len_and_shape[kwargs3-10] PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_str_key PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_slice[sl0-expected0] PASSED [ 30%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_slice[sl1-expected1] PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_slice[sl2-expected2] PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_slice[sl3-expected3] PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_slice[sl4-expected4] PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_get_item_slice[sl5-expected5] PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_slice_list PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_slice_mask PASSED [ 31%] skorch/tests/test_helper.py::TestSliceDict::test_slice_int PASSED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_len_sliced PASSED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_str_repr PASSED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_iter_over_keys PASSED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_grid_search_with_dict_works FAILED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_copy PASSED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_fromkeys_raises PASSED [ 32%] skorch/tests/test_helper.py::TestSliceDict::test_update PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDict::test_equals_arrays PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDict::test_equals_arrays_deep PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDict::test_equals_tensors PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDict::test_equals_tensors_deep PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDict::test_equals_arrays_tensors_mixed PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDict::test_equals_different_keys PASSED [ 33%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl0] PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl1] PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl2] PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl3] PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl4] PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl5] PASSED [ 34%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl6] PASSED [ 35%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl7] PASSED [ 35%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl8] PASSED [ 35%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl9] PASSED [ 35%] skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl10] PASSED [ 35%] 
skorch/tests/test_helper.py::TestSliceDataset::test_len_and_shape_sliced[sl11] PASSED [ 35%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_non_int_is_slicedataset[0] PASSED [ 35%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_non_int_is_slicedataset[1] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice[0-0] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice[0-1] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice[55-0] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice[55-1] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice[-3-0] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice[-3-1] PASSED [ 36%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl00-0-0] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl00-0-1] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl01-0-0] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl01-0-1] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl02-0-0] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl02-0-1] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl03--1-0] PASSED [ 37%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl03--1-1] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl04-1-0] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl04-1-1] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl05-5-0] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl05-5-1] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl06-6-0] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_twice[sl06-6-1] PASSED [ 38%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_three_times[sl00-sl10-5-0] PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_three_times[sl00-sl10-5-1] PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_three_times[sl01-sl11-2-0] PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_three_times[sl01-sl11-2-1] PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_three_times[sl02-sl12-29-0] PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_slice_three_times[sl02-sl12-29-1] PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_explicitly_pass_indices_at_init PASSED [ 39%] skorch/tests/test_helper.py::TestSliceDataset::test_access_element_out_of_bounds PASSED [ 40%] skorch/tests/test_helper.py::TestSliceDataset::test_fit_with_slds_works FAILED [ 40%] skorch/tests/test_helper.py::TestSliceDataset::test_fit_with_slds_without_valid_works FAILED [ 40%] skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_works PASSED [ 40%] skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_and_internal_split_works FAILED [ 40%] skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_X_and_slds_y PASSED [ 40%] skorch/tests/test_helper.py::TestSliceDataset::test_index_with_2d_array_raises PASSED [ 40%] skorch/tests/test_helper.py::TestPredefinedSplit::test_pickle PASSED [ 41%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_defaults FAILED [ 41%] 
skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_and_transform_defaults FAILED [ 41%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_defaults_two_categoricals FAILED [ 41%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_int_as_categorical FAILED [ 41%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_no_X FAILED [ 41%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_invalid_dtype_raises[data0] PASSED [ 41%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_invalid_dtype_raises[data1] PASSED [ 42%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_two_invalid_dtypes_raises PASSED [ 42%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_set_float_dtype[float16] PASSED [ 42%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_set_float_dtype[float32] PASSED [ 42%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_set_float_dtype[float64] PASSED [ 42%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_set_int_dtype[int16] PASSED [ 42%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_set_int_dtype[int32] PASSED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_set_int_dtype[int64] PASSED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_leave_float_dtype_as_in_df PASSED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_leave_int_dtype_as_in_df PASSED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_column_named_X_present PASSED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_and_predict_with_pipeline FAILED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_describe_signature_default_df PASSED [ 43%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_describe_signature_non_default_df PASSED [ 44%] skorch/tests/test_helper.py::TestDataFrameTransformer::test_describe_signature_other_dtypes PASSED [ 44%] skorch/tests/test_history.py::TestHistory::test_list_initialization PASSED [ 44%] skorch/tests/test_history.py::TestHistory::test_history_length PASSED [ 44%] skorch/tests/test_history.py::TestHistory::test_history_epoch_column PASSED [ 44%] skorch/tests/test_history.py::TestHistory::test_history_epoch_two_columns PASSED [ 44%] skorch/tests/test_history.py::TestHistory::test_history_epoch_two_columns_different_order PASSED [ 44%] skorch/tests/test_history.py::TestHistory::test_history_partial_index PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_partial_and_full_index PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_partial_join_list PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_retrieve_single_value PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_retrieve_multiple_values PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_non_existing_values PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_non_existing_values_batch PASSED [ 45%] skorch/tests/test_history.py::TestHistory::test_history_mixed_slicing PASSED [ 46%] skorch/tests/test_history.py::TestHistory::test_history_partial_and_full_index_batches PASSED [ 46%] skorch/tests/test_history.py::TestHistory::test_history_partial_batches_batch_key_3rd PASSED [ 46%] skorch/tests/test_history.py::TestHistory::test_history_partial_batches_batch_key_4th PASSED [ 46%] 
skorch/tests/test_history.py::TestHistory::test_history_partial_singular_values PASSED [ 46%] skorch/tests/test_history.py::TestHistory::test_history_slice_beyond_batches_but_key_not_batches PASSED [ 46%] skorch/tests/test_history.py::TestHistory::test_history_with_invalid_epoch_key PASSED [ 46%] skorch/tests/test_history.py::TestHistory::test_history_too_many_indices PASSED [ 47%] skorch/tests/test_history.py::TestHistory::test_history_save_load_cycle_file_obj PASSED [ 47%] skorch/tests/test_history.py::TestHistory::test_history_save_load_cycle_file_path PASSED [ 47%] skorch/tests/test_net.py::TestNeuralNet::test_train_net_after_copy[pickle] FAILED [ 47%] skorch/tests/test_net.py::TestNeuralNet::test_train_net_after_copy[copy.deepcopy] FAILED [ 47%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_one_unknown_argument PASSED [ 47%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_two_unknown_arguments PASSED [ 47%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_in_prefix_argument[iterator_train_shuffle-iterator_train__shuffle] PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_in_prefix_argument[optimizer_momentum-optimizer__momentum] PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_in_prefix_argument[modulenum_units-module__num_units] PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_in_prefix_argument[criterionreduce-criterion__reduce] PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_in_prefix_argument[callbacks_mycb__foo-callbacks__mycb__foo] PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_in_2_prefix_arguments PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_init_missing_dunder_and_unknown PASSED [ 48%] skorch/tests/test_net.py::TestNeuralNet::test_net_with_new_attribute_with_name_clash PASSED [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_fit ERROR [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_not_fitted_raises[predict] PASSED [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_not_fitted_raises[predict_proba] PASSED [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_not_fitted_raises[forward] PASSED [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_not_fitted_raises[forward_iter] PASSED [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_not_fitted_other_attributes PASSED [ 49%] skorch/tests/test_net.py::TestNeuralNet::test_net_learns FAILED [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_forward ERROR [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_forward_device_cpu ERROR [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_forward_device_gpu SKIPPED [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_dropout ERROR [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_load ERROR [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_pickle_load[False] SKIPPED [ 50%] skorch/tests/test_net.py::TestNeuralNet::test_device_torch_device[cpu] PASSED [ 51%] skorch/tests/test_net.py::TestNeuralNet::test_device_torch_device[cuda] SKIPPED [ 51%] skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_and_load_mixed_devices[cuda-False-cpu-True] SKIPPED [ 51%] skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_and_load_mixed_devices[cuda-True-cuda-False] SKIPPED [ 51%] skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_and_load_mixed_devices[cpu-True-cpu-False] SKIPPED [ 51%] 
skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_and_load_mixed_devices[cpu-False-cpu-False] SKIPPED [ 51%]
skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_and_load_uninitialized PASSED [ 51%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_invalid_argument_name_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_load_params_invalid_argument_name_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_f_params_and_f_module_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_load_params_with_f_params_and_f_module_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_no_state_dict_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_load_params_no_state_dict_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_unknown_attribute_raises ERROR [ 52%]
skorch/tests/test_net.py::TestNeuralNet::test_load_params_unknown_attribute_raises ERROR [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_file ERROR [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_str ERROR [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_file_with_history_optimizer_criterion ERROR [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_str_with_history_optimizer ERROR [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_save_and_load_from_checkpoint[True] FAILED [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_save_and_load_from_checkpoint[False] FAILED [ 53%]
skorch/tests/test_net.py::TestNeuralNet::test_checkpoint_with_prefix_and_dirname FAILED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_save_and_load_from_checkpoint_formatting FAILED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_not_init_optimizer PASSED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_load_params_not_init_optimizer PASSED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_save_state_dict_not_init PASSED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_load_state_dict_not_init PASSED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_cuda_intercompatibility SKIPPED [ 54%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_cuda_load_params_cpu_when_cuda_available SKIPPED [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_load_cuda_params_to_cuda[f_params-net_cuda.pt] SKIPPED [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_load_cuda_params_to_cuda[f_optimizer-optimizer_cuda.pt] SKIPPED [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_load_cuda_params_to_cpu[f_params-net_cuda.pt] SKIPPED [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_load_cuda_params_to_cpu[f_optimizer-optimizer_cuda.pt] SKIPPED [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_history_file_obj ERROR [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_history_file_path[str] ERROR [ 55%]
skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_history_file_path[Path] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_train_begin-1] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_train_end-1] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_epoch_begin-10] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_epoch_end-10] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_batch_begin-90] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_batch_end-90] ERROR [ 56%]
skorch/tests/test_net.py::TestNeuralNet::test_history_correct_shape ERROR [ 57%]
skorch/tests/test_net.py::TestNeuralNet::test_history_default_keys ERROR [ 57%]
skorch/tests/test_net.py::TestNeuralNet::test_history_is_filled ERROR [ 57%]
skorch/tests/test_net.py::TestNeuralNet::test_set_params_works FAILED [ 57%]
skorch/tests/test_net.py::TestNeuralNet::test_set_params_then_initialize_remembers_param PASSED [ 57%]
skorch/tests/test_net.py::TestNeuralNet::test_set_params_on_callback_then_initialize_remembers_param PASSED [ 57%]
skorch/tests/test_net.py::TestNeuralNet::test_changing_model_reinitializes_optimizer FAILED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_optimizer_needs_model PASSED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_lr_after_init_reflected_in_optimizer PASSED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_message[kwargs0-] PASSED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_message[kwargs1-] PASSED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_message[kwargs2-Re-initializing module because the following parameters were re-set: hidden_units, input_units.\nRe-initializing optimizer.] PASSED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_message[kwargs3-Re-initializing module because the following parameters were re-set: hidden_units, input_units.\nRe-initializing optimizer.] PASSED [ 58%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_no_message[kwargs0] PASSED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_no_message[kwargs1] PASSED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_no_message[kwargs2] PASSED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_no_message[kwargs3] PASSED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_reinitializing_module_optimizer_no_message[kwargs4] PASSED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_optimizer_param_groups PASSED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_module_params_in_init FAILED [ 59%]
skorch/tests/test_net.py::TestNeuralNet::test_module_initialized_with_partial_module PASSED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_init_with_params PASSED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_set_params PASSED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_non_module PASSED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_params_on_device[cpu] SKIPPED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_params_on_device[cuda] SKIPPED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_with_name_init_with_params PASSED [ 60%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_set_params PASSED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_name_collides_with_default PASSED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_same_inferred_name_twice PASSED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_keeps_order PASSED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_custom_name_is_untouched PASSED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_unique_naming_avoids_conflicts PASSED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_in_sklearn_pipeline FAILED [ 61%]
skorch/tests/test_net.py::TestNeuralNet::test_grid_search_works FAILED [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_change_get_loss PASSED [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_net_no_valid FAILED [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_use_cuda_on_model SKIPPED [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_get_params_works PASSED [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_get_params_with_uninit_callbacks PASSED [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_get_params_no_learned_params ERROR [ 62%]
skorch/tests/test_net.py::TestNeuralNet::test_clone_results_in_uninitialized_net ERROR [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_clone_copies_parameters PASSED [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module FAILED [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module_other_params FAILED [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module_non_default FAILED [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_message_fit_with_initialized_net FAILED [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module_partial_fit PASSED [ 63%]
skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module_warm_start PASSED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_sequential FAILED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_call_fit_twice_retrains FAILED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_call_fit_twice_warmstart FAILED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_partial_fit_first_call FAILED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_call_partial_fit_after_fit FAILED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_binary_classification_with_cuda SKIPPED [ 64%]
skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_custom_dataset_args FAILED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_initalized_dataset FAILED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_partialed_dataset FAILED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_initalized_dataset_and_kwargs_raises PASSED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_repr_uninitialized_works PASSED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_repr_initialized_works PASSED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_repr_fitted_works FAILED [ 65%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_params_passed_to_module FAILED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_params_passed_to_module_in_pipeline FAILED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_params_passed_to_train_split FAILED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_data_dict_and_fit_params FAILED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_data_dict_and_fit_params_conflicting_names_raises PASSED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset FAILED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_with_dataset PASSED [ 66%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_X_y_inaccessible_does_not_raise FAILED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_without_explicit_y FAILED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_stratified_without_explicit_y_raises PASSED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_one_item_error PASSED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_with_dataset_one_item_error PASSED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_three_items_error PASSED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_with_dataset_three_items_error PASSED [ 67%]
skorch/tests/test_net.py::TestNeuralNet::test_multioutput_forward_iter PASSED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_multioutput_forward PASSED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_multioutput_forward_device_gpu SKIPPED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_multioutput_predict PASSED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_multiouput_predict_proba PASSED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_callback_possible PASSED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_callback_default_possible PASSED [ 68%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_callback_to_none_possible FAILED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_callback_to_none_and_more_params_during_init_raises PASSED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_setting_callback_to_none_and_more_params_later_raises PASSED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_set_params_with_unknown_key_raises PASSED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_net_variable_prediction_lengths PASSED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_net_variable_label_lengths PASSED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_no_grad_during_validation FAILED [ 69%]
skorch/tests/test_net.py::TestNeuralNet::test_callback_on_grad_computed FAILED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_no_grad_during_evaluation_unless_training[True] PASSED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_no_grad_during_evaluation_unless_training[False] PASSED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_batch_size_neg_1_uses_whole_dataset[net_kwargs0-800-200] FAILED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_batch_size_neg_1_uses_whole_dataset[net_kwargs1-800-128] FAILED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_batch_size_neg_1_uses_whole_dataset[net_kwargs2-128-200] FAILED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_batch_count[40] FAILED [ 70%]
skorch/tests/test_net.py::TestNeuralNet::test_batch_count[100] FAILED [ 71%]
skorch/tests/test_net.py::TestNeuralNet::test_fit_lbfgs_optimizer FAILED [ 71%]
skorch/tests/test_net.py::TestNeuralNet::test_accumulator_that_returns_last_value FAILED [ 71%]
skorch/tests/test_net.py::TestNeuralNet::test_predefined_split FAILED [ 71%]
skorch/tests/test_net.py::TestNeuralNet::test_predefined_split_with_y FAILED [ 71%]
skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_doesnt_reinitialize ERROR [ 71%]
skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_sets_lr ERROR [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_sets_lr_via_pgroup_0 ERROR [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_sets_lr_pgroups FAILED [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_training_set_correctly FAILED [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_criterion_is_not_a_torch_module FAILED [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[1] FAILED [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[2] FAILED [ 72%]
skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[3] FAILED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[5] FAILED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[10] FAILED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_module PASSED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_module_instance PASSED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_optimizer PASSED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_ending_in_underscore PASSED [ 73%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_no_duplicates PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_non_torch_attribute PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_setattr_does_not_modify_class_attribute PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_set_params_on_custom_module PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_custom_module PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[False-None-raises0] PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[True-None-raises1] PASSED [ 74%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[False-default-raises2] PASSED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[True-default-raises3] PASSED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[False--raises4] PASSED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[True--raises5] PASSED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_passes_y_to_train_split_when_not_none[True--raises6] PASSED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_nonlinearity_called_with_predict FAILED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_nonlinearity_called_with_predict_proba FAILED [ 75%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_nonlinearity_none PASSED [ 76%]
skorch/tests/test_net.py::TestNeuralNet::test_predict_nonlinearity_type_error PASSED [ 76%]
skorch/tests/test_net.py::TestNetSparseInput::test_fit_sparse_csr_learns FAILED [ 76%]
skorch/tests/test_net.py::TestNetSparseInput::test_fit_sparse_csr_learns_cuda SKIPPED [ 76%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_clone PASSED [ 76%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_fit PASSED [ 76%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_not_fitted_raises[predict] PASSED [ 76%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_not_fitted_raises[predict_proba] PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_not_fitted_raises[forward] PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_not_fitted_raises[forward_iter] PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_net_learns PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_history_default_keys PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_target_1d_raises PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_predict_predict_proba PASSED [ 77%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_score PASSED [ 78%]
skorch/tests/test_regressor.py::TestNeuralNetRegressor::test_multioutput_score PASSED [ 78%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_unfit_net_raises[mean] PASSED [ 78%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_unfit_scored_net_raises[mean] PASSED [ 78%]
skorch/tests/test_scoring.py::TestLossScoring::test_nonnull_sample_weight_raises[mean] ERROR [ 78%]
skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_output_type[mean] ERROR [ 78%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_on_net_fit[mean] ERROR [ 78%]
skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_matches_criterion_value[mean] ERROR [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_with_reduction_none[mean] FAILED [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_unknown_reduction_raises[mean] ERROR [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_unfit_net_raises[sum] PASSED [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_unfit_scored_net_raises[sum] PASSED [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_nonnull_sample_weight_raises[sum] ERROR [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_output_type[sum] ERROR [ 79%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_on_net_fit[sum] ERROR [ 80%]
skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_matches_criterion_value[sum] ERROR [ 80%]
skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_with_reduction_none[sum] FAILED [ 80%]
skorch/tests/test_scoring.py::TestLossScoring::test_score_unknown_reduction_raises[sum] ERROR [ 80%]
skorch/tests/test_setter.py::TestOptimizerSetter::test_lr_attribute_is_updated PASSED [ 80%]
skorch/tests/test_setter.py::TestOptimizerSetter::test_wrong_name_raises PASSED [ 80%]
skorch/tests/test_setter.py::TestOptimizerSetter::test_only_specific_param_group_updated[momentum-0.1-0] PASSED [ 80%]
skorch/tests/test_setter.py::TestOptimizerSetter::test_only_specific_param_group_updated[momentum-0.1-1] PASSED [ 81%]
skorch/tests/test_setter.py::TestOptimizerSetter::test_only_specific_param_group_updated[lr-0.3-0] PASSED [ 81%]
skorch/tests/test_setter.py::TestOptimizerSetter::test_only_specific_param_group_updated[lr-0.3-1] PASSED [ 81%]
skorch/tests/test_toy.py::TestMLPModule::test_one_hidden PASSED [ 81%]
skorch/tests/test_toy.py::TestMLPModule::test_two_hidden PASSED [ 81%]
skorch/tests/test_toy.py::TestMLPModule::test_many_hidden[0] PASSED [ 81%]
skorch/tests/test_toy.py::TestMLPModule::test_many_hidden[1] PASSED [ 81%]
skorch/tests/test_toy.py::TestMLPModule::test_many_hidden[2] PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_many_hidden[5] PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_many_hidden[10] PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_output_nonlin PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_output_squeezed PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_dropout PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_make_classifier PASSED [ 82%]
skorch/tests/test_toy.py::TestMLPModule::test_make_binary_classifier PASSED [ 83%]
skorch/tests/test_toy.py::TestMLPModule::test_make_regressor PASSED [ 83%]
skorch/tests/test_utils.py::TestToTensor::test_device_setting_cuda SKIPPED [ 83%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X0-expected0-cpu] PASSED [ 83%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X1-expected1-cpu] PASSED [ 83%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X2-expected2-cpu] PASSED [ 83%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X3-expected3-cpu] PASSED [ 83%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X4-expected4-cpu] PASSED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X5-expected5-cpu] PASSED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X6-expected6-cpu] PASSED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_tensor_conversion_cuda[X7-expected7-cpu] PASSED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_sparse_tensor[cpu] PASSED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_sparse_tensor[cuda] SKIPPED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_sparse_tensor_not_accepted_raises[cpu] PASSED [ 84%]
skorch/tests/test_utils.py::TestToTensor::test_sparse_tensor_not_accepted_raises[cuda] SKIPPED [ 85%]
skorch/tests/test_utils.py::TestToNumpy::test_tensor PASSED [ 85%]
skorch/tests/test_utils.py::TestToNumpy::test_list PASSED [ 85%]
skorch/tests/test_utils.py::TestToNumpy::test_tuple PASSED [ 85%]
skorch/tests/test_utils.py::TestToNumpy::test_dict PASSED [ 85%]
skorch/tests/test_utils.py::TestToNumpy::test_invalid_inputs[1] PASSED [ 85%]
skorch/tests/test_utils.py::TestToNumpy::test_invalid_inputs[x_invalid1] PASSED [ 86%]
skorch/tests/test_utils.py::TestToNumpy::test_invalid_inputs[x_invalid2] PASSED [ 86%]
skorch/tests/test_utils.py::TestToNumpy::test_invalid_inputs[x_invalid3] PASSED [ 86%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_torch_tensor[cpu-cpu] PASSED [ 86%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_torch_tensor[cpu-cuda] SKIPPED [ 86%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_torch_tensor[cuda-cpu] SKIPPED [ 86%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_torch_tensor[cuda-cuda] SKIPPED [ 86%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_torch_tensor[None-None] PASSED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_tuple_torch_tensor[cpu-cpu] PASSED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_tuple_torch_tensor[cpu-cuda] SKIPPED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_tuple_torch_tensor[cuda-cpu] SKIPPED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_tuple_torch_tensor[cuda-cuda] SKIPPED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_tuple_torch_tensor[None-None] PASSED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_dict_torch_tensor[cpu-cpu] PASSED [ 87%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_dict_torch_tensor[cpu-cuda] SKIPPED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_dict_torch_tensor[cuda-cpu] SKIPPED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_dict_torch_tensor[cuda-cuda] SKIPPED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_dict_torch_tensor[None-None] PASSED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_packed_padded_sequence[cpu-cpu] PASSED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_packed_padded_sequence[cpu-cuda] SKIPPED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_packed_padded_sequence[cuda-cpu] SKIPPED [ 88%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_packed_padded_sequence[cuda-cuda] SKIPPED [ 89%]
skorch/tests/test_utils.py::TestToDevice::test_check_device_packed_padded_sequence[None-None] PASSED [ 89%]
skorch/tests/test_utils.py::TestToDevice::test_nested_data[cpu-cpu] PASSED [ 89%]
skorch/tests/test_utils.py::TestToDevice::test_nested_data[cpu-cuda] SKIPPED [ 89%]
skorch/tests/test_utils.py::TestToDevice::test_nested_data[cuda-cpu] SKIPPED [ 89%]
skorch/tests/test_utils.py::TestToDevice::test_nested_data[cuda-cuda] SKIPPED [ 89%]
skorch/tests/test_utils.py::TestToDevice::test_nested_data[None-None] PASSED [ 89%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections0] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections1] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections2] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections3] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections4] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections5] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections6] PASSED [ 90%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections7] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_no_duplicates[collections8] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections0-expected0] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections1-expected1] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections2-expected2] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections3-expected3] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections4-expected4] PASSED [ 91%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections5-expected5] PASSED [ 92%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections6-expected6] PASSED [ 92%]
skorch/tests/test_utils.py::TestDuplicateItems::test_duplicates[collections7-expected7] PASSED [ 92%]
skorch/tests/test_utils.py::TestParamsFor::test_params_for[p1-kwargs0-expected0] PASSED [ 92%]
skorch/tests/test_utils.py::TestParamsFor::test_params_for[p2-kwargs1-expected1] PASSED [ 92%]
skorch/tests/test_utils.py::TestParamsFor::test_params_for[p1-kwargs2-expected2] PASSED [ 92%]
skorch/tests/test_utils.py::TestParamsFor::test_params_for[p2-kwargs3-expected3] PASSED [ 92%]
skorch/tests/test_utils.py::TestDataFromDataset::test_with_skorch_ds PASSED [ 93%]
skorch/tests/test_utils.py::TestDataFromDataset::test_with_subset PASSED [ 93%]
skorch/tests/test_utils.py::TestDataFromDataset::test_with_subset_subset PASSED [ 93%]
skorch/tests/test_utils.py::TestDataFromDataset::test_with_other_ds PASSED [ 93%]
skorch/tests/test_utils.py::TestDataFromDataset::test_with_dict_data PASSED [ 93%]
skorch/tests/test_utils.py::TestDataFromDataset::test_subset_with_y_none PASSED [ 93%]
skorch/tests/test_utils.py::TestMultiIndexing::test_ndarray[data0-i0-expected0] PASSED [ 93%]
skorch/tests/test_utils.py::TestMultiIndexing::test_ndarray[data1-2-expected1] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_ndarray[data2-i2-expected2] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_torch_tensor[data0-i0-expected0] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_torch_tensor[data1-2-expected1] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_torch_tensor[data2-i2-expected2] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_torch_tensor[data3-i3-expected3] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_list[data0-i0-expected0] PASSED [ 94%]
skorch/tests/test_utils.py::TestMultiIndexing::test_list[data1-i1-expected1] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_list[data2-2-3] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_list[data3--2-3] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_dict_of_lists[data0-0-expected0] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_dict_of_lists[data1-i1-expected1] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_dict_of_arrays[data0-0-expected0] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_dict_of_arrays[data1-i1-expected1] PASSED [ 95%]
skorch/tests/test_utils.py::TestMultiIndexing::test_dict_of_torch_tensors[data0-0-expected0] PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_dict_of_torch_tensors[data1-i1-expected1] PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_mixed_data PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_mixed_data_slice PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_pandas_dataframe PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_pandas_dataframe_slice PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_pandas_series PASSED [ 96%]
skorch/tests/test_utils.py::TestMultiIndexing::test_pandas_series_slice PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_list_of_dataframe_and_series PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_list_of_dataframe_and_series_slice PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_index_torch_tensor_with_numpy_int_array PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_index_torch_tensor_with_numpy_bool_array PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_index_with_float_array_raises PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_boolean_index_2d PASSED [ 97%]
skorch/tests/test_utils.py::TestMultiIndexing::test_boolean_index_2d_with_torch_tensor PASSED [ 98%]
skorch/tests/test_utils.py::TestMultiIndexing::test_sparse_csr_matrix[data0-i0-expected0] PASSED [ 98%]
skorch/tests/test_utils.py::TestMultiIndexing::test_sparse_csr_matrix[data1-2-expected1] PASSED [ 98%]
skorch/tests/test_utils.py::TestMultiIndexing::test_sparse_csr_matrix[data2-i2-expected2] PASSED [ 98%]
skorch/tests/test_utils.py::TestIsSkorchDataset::test_data_types[input_data0-False] PASSED [ 98%]
skorch/tests/test_utils.py::TestIsSkorchDataset::test_data_types[input_data1-False] PASSED [ 98%]
skorch/tests/test_utils.py::TestIsSkorchDataset::test_data_types[input_data2-False] PASSED [ 98%]
skorch/tests/test_utils.py::TestIsSkorchDataset::test_data_types[input_data3-True] PASSED [ 99%]
skorch/tests/test_utils.py::TestIsSkorchDataset::test_data_types[input_data4-True] PASSED [ 99%]
skorch/tests/test_utils.py::TestTeeGenerator::test_returns_copies_of_generator PASSED [ 99%]
skorch/tests/test_utils.py::TestInferPredictNonlinearity::test_infer_neural_net_classifier_default PASSED [ 99%]
skorch/tests/test_utils.py::TestInferPredictNonlinearity::test_infer_neural_net_classifier_crossentropy_loss PASSED [ 99%]
skorch/tests/test_utils.py::TestInferPredictNonlinearity::test_infer_neural_binary_net_classifier_default PASSED [ 99%]
skorch/tests/test_utils.py::TestInferPredictNonlinearity::test_infer_neural_net_regressor_default PASSED [100%]

==================================== ERRORS ====================================
__________________ ERROR at setup of TestNeuralNet.test_clone __________________

self =
net = [initialized](
  module_=MLPModule(
    (nonlin): ReLU()
    (output_no....5, inplace=False)
    (6): Linear(in_features=10, out_features=2, bias=True)
    (7): Softmax(dim=-1)
  ),
)
data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    @pytest.fixture(scope='module')
    def net_fit(self, net, data):
        # Careful, don't call additional fits on this, since that would have
        # side effects on other tests.
        X, y = data
>       return net.fit(X, y)

skorch/tests/test_classifier.py:55:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
skorch/net.py:903: in fit
    self.partial_fit(X, y, **fit_params)
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/net.py:709: in train_step
    self.optimizer_.step(step_fn)
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step
    loss = closure()
skorch/net.py:705: in step_fn
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.3268, -1.2775],
        [-0.6221, -0.7697],
        [-0.3016, -1.3456],
        [-0.4793, -0.9656],
        ..., -1.0870],
        [-0.4500, -1.0150],
        [-0.3346, -1.2574],
        [-0.4047, -1.1002]], grad_fn=)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0,
        1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100,
                 reduce=None, reduction='mean'):
        # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor
        r"""The negative log likelihood loss.

        See :class:`~torch.nn.NLLLoss` for details.

        Args:
            input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)`
                in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1`
                in the case of K-dimensional loss.
            target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`,
                or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for
                K-dimensional loss.
            weight (Tensor, optional): a manual rescaling weight given to each
                class. If given, has to be a Tensor of size `C`
            size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
                the losses are averaged over each loss element in the batch. Note that for
                some losses, there multiple elements per sample. If the field :attr:`size_average`
                is set to ``False``, the losses are instead summed for each minibatch. Ignored
                when reduce is ``False``. Default: ``True``
            ignore_index (int, optional): Specifies a target value that is ignored
                and does not contribute to the input gradient. When :attr:`size_average` is
                ``True``, the loss is averaged over non-ignored targets. Default: -100
            reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
                losses are averaged or summed over observations for each minibatch depending
                on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
                batch element instead and ignores :attr:`size_average`. Default: ``True``
            reduction (string, optional): Specifies the reduction to apply to the output:
                ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied,
                ``'mean'``: the sum of the output will be divided by the number of elements
                in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average`
                and :attr:`reduce` are in the process of being deprecated, and in the meantime,
                specifying either of those two args will override :attr:`reduction`. Default: ``'mean'``

        Example::

            >>> # input is of size N x C = 3 x 5
            >>> input = torch.randn(3, 5, requires_grad=True)
            >>> # each element in target has to have 0 <= value < C
            >>> target = torch.tensor([1, 0, 4])
            >>> output = F.nll_loss(F.log_softmax(input), target)
            >>> output.backward()
        """
        if not torch.jit.is_scripting():
            tens_ops = (input, target)
            if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
                return handle_torch_function(
                    nll_loss, tens_ops, input, target, weight=weight, size_average=size_average,
                    ignore_index=ignore_index, reduce=reduce, reduction=reduction)
        if size_average is not None or reduce is not None:
            reduction = _Reduction.legacy_get_string(size_average, reduce)
        dim = input.dim()
        if dim < 2:
            raise ValueError('Expected 2 or more dimensions (got {})'.format(dim))
        if input.size(0) != target.size(0):
            raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
                             .format(input.size(0), target.size(0)))
        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
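Every setup error in this run ends in the same final frame: torch.nn.functional.nll_loss rejects the int32 target tensor, because NLLLoss requires int64 ("Long") class labels. On a 32-bit platform such as armhf, numpy's default integer type is 32 bits wide, which is likely why the generated test labels reach the criterion as "Int" here but not on 64-bit builders. The following minimal sketch is not part of the captured log or of the skorch test suite (the tensor values are invented); it reproduces the message and shows the cast that avoids it:

    # Minimal sketch, assuming PyTorch >= 1.0 and numpy; values invented.
    # nll_loss requires int64 ("Long") targets; int32 ("Int") targets raise.
    import numpy as np
    import torch
    import torch.nn.functional as F

    log_probs = torch.randn(4, 2).log_softmax(dim=-1)   # (N, C) log-probabilities
    y_int32 = torch.from_numpy(np.array([0, 1, 1, 0], dtype=np.int32))

    try:
        F.nll_loss(log_probs, y_int32)                  # int32 targets are rejected
    except RuntimeError as exc:
        print(exc)                                      # "expected scalar type Long but found Int" (or similar)

    loss = F.nll_loss(log_probs, y_int32.long())        # casting to int64 avoids the error
    print(loss.item())

At the numpy level the equivalent cast would be y = y.astype(np.int64) before calling net.fit(X, y), which would let these fixtures initialize regardless of the platform's default integer width.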
________ ERROR at setup of TestNeuralNet.test_predict_and_predict_proba ________

(same net_fit fixture at skorch/tests/test_classifier.py:55; traceback identical
to TestNeuralNet.test_clone above)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
__________________ ERROR at setup of TestNeuralNet.test_score __________________

(same net_fit fixture at skorch/tests/test_classifier.py:55; traceback identical
to TestNeuralNet.test_clone above)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
______ ERROR at setup of TestNeuralNet.test_with_calibrated_classifier_cv ______

(same net_fit fixture at skorch/tests/test_classifier.py:55; traceback identical
to TestNeuralNet.test_clone above)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
___________________ ERROR at setup of TestNeuralNet.test_fit ___________________

(net_fit fixture at skorch/tests/test_net.py:107, which builds a fresh net with
a dummy callback; the fit fails with the same traceback as above)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_________________ ERROR at setup of TestNeuralNet.test_forward _________________

(net_fit fixture at skorch/tests/test_net.py:107; traceback identical to
TestNeuralNet.test_fit above)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
___________ ERROR at setup of TestNeuralNet.test_forward_device_cpu ____________

self =
net_cls =
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
dummy_callback =
data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    @pytest.fixture(scope='module')
    def net_fit(self, net_cls, module_cls, dummy_callback, data):
        # Careful, don't call additional fits or set_params on this,
        # since that would have side effects on other tests.
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _________________ ERROR at setup of TestNeuralNet.test_dropout _________________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ____________ ERROR at setup of TestNeuralNet.test_pickle_save_load _____________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_save_params_invalid_argument_name_raises _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_load_params_invalid_argument_name_raises _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_save_params_with_f_params_and_f_module_raises _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_load_params_with_f_params_and_f_module_raises _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ____ ERROR at setup of TestNeuralNet.test_save_params_no_state_dict_raises _____ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ____ ERROR at setup of TestNeuralNet.test_load_params_no_state_dict_raises _____ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __ ERROR at setup of TestNeuralNet.test_save_params_unknown_attribute_raises ___ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __ ERROR at setup of TestNeuralNet.test_load_params_unknown_attribute_raises ___ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ________ ERROR at setup of TestNeuralNet.test_save_load_state_dict_file ________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ________ ERROR at setup of TestNeuralNet.test_save_load_state_dict_str _________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_save_load_state_dict_file_with_history_optimizer_criterion _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit_adam(self, net_cls, module_cls, data): net = net_cls( module_cls, max_epochs=2, lr=0.1, optimizer=torch.optim.Adam) > net.fit(*data) skorch/tests/test_net.py:561: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/adam.py:66: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. 
target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_save_load_state_dict_str_with_history_optimizer _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit_adam(self, net_cls, module_cls, data): net = net_cls( module_cls, max_epochs=2, lr=0.1, optimizer=torch.optim.Adam) > net.fit(*data) skorch/tests/test_net.py:561: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/adam.py:66: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ____ ERROR at setup of TestNeuralNet.test_save_params_with_history_file_obj ____ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestNeuralNet.test_save_params_with_history_file_path[str] _ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
            Default: ``True``
    [... remainder of this nll_loss listing is identical to the full listing below ...]
    if dim == 2:
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_ ERROR at setup of TestNeuralNet.test_save_params_with_history_file_path[Path] _

self = <...>
net_cls = <...>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
dummy_callback = <...>
data = (array([[-0.9658346 , -2.1890705 ,  0.16985609, ..., -0.89645284,  0.3759244 , -1.0849651 ],
       [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    @pytest.fixture(scope='module')
    def net_fit(self, net_cls, module_cls, dummy_callback, data):
        # Careful, don't call additional fits or set_params on this,
        # since that would have side effects on other tests.
        X, y = data
        # We need a new instance of the net and cannot reuse the net
        # fixture, because otherwise fixture net and net_fit refer to
        # the same object; also, we cannot clone(net) because this
        # will result in the dummy_callback not being the mock anymore
        net = net_cls(
            module_cls,
            callbacks=[('dummy', dummy_callback)],
            max_epochs=10,
            lr=0.1,
        )
>       return net.fit(X, y)

skorch/tests/test_net.py:107:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
skorch/net.py:903: in fit
    self.partial_fit(X, y, **fit_params)
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/net.py:709: in train_step
    self.optimizer_.step(step_fn)
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step
    loss = closure()
skorch/net.py:705: in step_fn
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.5952, -0.8017],
        [-0.6448, -0.7440],
        [-2.2988, -0.1058],
        [-0.9461, -0.4914],
        ..., -0.6582],
        [-0.7866, -0.6077],
        [-0.5596, -0.8473],
        [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1,
        0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100,
             reduce=None, reduction='mean'):
    # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor
    r"""The negative log likelihood loss.

    See :class:`~torch.nn.NLLLoss` for details.

    Args:
        input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)`
            in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1`
            in the case of K-dimensional loss.
        target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`,
            or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for
            K-dimensional loss.
        weight (Tensor, optional): a manual rescaling weight given to each
            class. If given, has to be a Tensor of size `C`
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        ignore_index (int, optional): Specifies a target value that is ignored
            and does not contribute to the input gradient. When :attr:`size_average` is
            ``True``, the loss is averaged over non-ignored targets. Default: -100
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied,
            ``'mean'``: the sum of the output will be divided by the number of
            elements in the output, ``'sum'``: the output will be summed. Note:
            :attr:`size_average` and :attr:`reduce` are in the process of being deprecated,
            and in the meantime, specifying either of those two args will override
            :attr:`reduction`. Default: ``'mean'``

    Example::

        >>> # input is of size N x C = 3 x 5
        >>> input = torch.randn(3, 5, requires_grad=True)
        >>> # each element in target has to have 0 <= value < C
        >>> target = torch.tensor([1, 0, 4])
        >>> output = F.nll_loss(F.log_softmax(input), target)
        >>> output.backward()
    """
    if not torch.jit.is_scripting():
        tens_ops = (input, target)
        if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
            return handle_torch_function(
                nll_loss, tens_ops, input, target, weight=weight, size_average=size_average,
                ignore_index=ignore_index, reduce=reduce, reduction=reduction)
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    dim = input.dim()
    if dim < 2:
        raise ValueError('Expected 2 or more dimensions (got {})'.format(dim))
    if input.size(0) != target.size(0):
        raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
                         .format(input.size(0), target.size(0)))
    if dim == 2:
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
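The failure mode above is an integer-width mismatch: on this 32-bit armhf environment numpy's default integer type is 32-bit, so the class labels in the test data reach torch as torch.int32 tensors, while torch._C._nn.nll_loss only accepts 64-bit (Long) class indices. A minimal sketch of the mismatch and the cast that resolves it; this is a standalone illustration assuming only torch itself, not code taken from this build:

# Minimal sketch: nll_loss rejects int32 targets but accepts int64 (Long).
import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(4, 2, requires_grad=True), dim=-1)

target_int32 = torch.tensor([0, 1, 1, 0], dtype=torch.int32)
try:
    F.nll_loss(log_probs, target_int32)   # int32 targets are rejected
except RuntimeError as err:
    print(err)                            # "expected scalar type Long but found Int"

loss = F.nll_loss(log_probs, target_int32.long())  # cast to int64 (Long) works
loss.backward()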
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ___ ERROR at setup of TestNeuralNet.test_callback_is_called[on_train_end-1] ____ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __ ERROR at setup of TestNeuralNet.test_callback_is_called[on_epoch_begin-10] __ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ___ ERROR at setup of TestNeuralNet.test_callback_is_called[on_epoch_end-10] ___ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __ ERROR at setup of TestNeuralNet.test_callback_is_called[on_batch_begin-90] __ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ___ ERROR at setup of TestNeuralNet.test_callback_is_called[on_batch_end-90] ___ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __________ ERROR at setup of TestNeuralNet.test_history_correct_shape __________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __________ ERROR at setup of TestNeuralNet.test_history_default_keys ___________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ____________ ERROR at setup of TestNeuralNet.test_history_is_filled ____________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) dummy_callback = data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope='module') def net_fit(self, net_cls, module_cls, dummy_callback, data): # Careful, don't call additional fits or set_params on this, # since that would have side effects on other tests. 
X, y = data # We need a new instance of the net and cannot reuse the net # fixture, because otherwise fixture net and net_fit refer to # the same object; also, we cannot clone(net) because this # will result in the dummy_callback not being the mock anymore net = net_cls( module_cls, callbacks=[('dummy', dummy_callback)], max_epochs=10, lr=0.1, ) > return net.fit(X, y) skorch/tests/test_net.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. 
                Default: ``True``
        ... (remainder of the nll_loss docstring and body as in the listing above) ...
        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
____________ ERROR at setup of TestNeuralNet.test_get_params_no_learned_params _____________
(setup of the shared net_fit fixture, skorch/tests/test_net.py:107, fails again;
locals and traceback are identical to test_history_is_filled above)
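The failure is a dtype mismatch rather than a logic error: torch.nn.functional.nll_loss
only accepts target tensors of dtype torch.int64 (Long), while the target above arrives
as torch.int32, presumably because NumPy's default integer is 32-bit on armhf. A minimal
sketch that reproduces the error and the cast that avoids it (assumes only numpy and
torch are available; the names are illustrative, not part of the test suite):

    import numpy as np
    import torch
    import torch.nn.functional as F

    log_probs = torch.randn(8, 2).log_softmax(dim=-1)    # (N, C) log-probabilities
    y32 = torch.from_numpy(np.zeros(8, dtype=np.int32))  # int32 labels, the 32-bit default

    try:
        F.nll_loss(log_probs, y32)
    except RuntimeError as exc:
        print(exc)  # "expected scalar type Long but found Int"

    loss = F.nll_loss(log_probs, y32.long())             # int64 (Long) targets: works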
___________ ERROR at setup of TestNeuralNet.test_clone_results_in_uninitialized_net ____________
(identical net_fit fixture failure: RuntimeError: expected scalar type Long but found Int)
__________ ERROR at setup of TestNeuralNet.test_set_lr_at_runtime_doesnt_reinitialize __________
(identical net_fit fixture failure; traceback as above)
________________ ERROR at setup of TestNeuralNet.test_set_lr_at_runtime_sets_lr ________________
(identical net_fit fixture failure; traceback as above)
_________ ERROR at setup of TestNeuralNet.test_set_lr_at_runtime_sets_lr_via_pgroup_0 __________
(identical net_fit fixture failure; traceback as above)
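Every TestNeuralNet error above is the same module-scoped net_fit fixture failing once
per dependent test. A self-contained sketch of the call-site workaround, casting the
labels to int64 before fit; TinyClassifier is a hypothetical stand-in for the suite's
MLPModule (20 inputs, 2 classes, softmax output, mirroring the locals in the traceback):

    import numpy as np
    import torch.nn as nn
    from skorch import NeuralNetClassifier

    class TinyClassifier(nn.Module):
        # Softmax output matches skorch's NLLLoss handling, which logs the
        # probabilities in NeuralNetClassifier.get_loss (see classifier.py:127 above).
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(20, 10), nn.ReLU(),
                nn.Linear(10, 2), nn.Softmax(dim=-1),
            )

        def forward(self, X):
            return self.net(X)

    X = np.random.randn(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, size=100).astype(np.int64)  # the int64 cast is the fix

    net = NeuralNetClassifier(TinyClassifier, max_epochs=10, lr=0.1)
    net.fit(X, y)  # with int32 y this raises the RuntimeError seen above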
__________ ERROR at setup of TestLossScoring.test_nonnull_sample_weight_raises[mean] ___________
(net_fit fixture at skorch/tests/test_scoring.py:32 fails with the same
RuntimeError: expected scalar type Long but found Int; traceback as above)
_____________ ERROR at setup of TestLossScoring.test_scored_net_output_type[mean] ______________
(scored_net_fit fixture at skorch/tests/test_scoring.py:57 fails identically)
________________ ERROR at setup of TestLossScoring.test_score_on_net_fit[mean] _________________
(net_fit fixture failure, identical traceback)
_________ ERROR at setup of TestLossScoring.test_scored_net_matches_criterion_value[mean] ______
(scored_net_fit fixture failure, identical traceback)
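For reference, the reduction = _Reduction.legacy_get_string(size_average, reduce) step
visible in each traceback maps the two deprecated arguments onto a reduction string. An
illustrative re-implementation of that mapping (the real helper lives in
torch.nn._reduction and additionally emits a deprecation warning):

    def legacy_reduction(size_average, reduce):
        # Unset legacy args default to True, mirroring legacy_get_string.
        size_average = True if size_average is None else size_average
        reduce = True if reduce is None else reduce
        if size_average and reduce:
            return 'mean'   # average over all elements
        if reduce:
            return 'sum'    # sum over all elements
        return 'none'       # per-element losses

    assert legacy_reduction(None, None) == 'mean'
    assert legacy_reduction(False, True) == 'sum'
    assert legacy_reduction(None, False) == 'none'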
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestLossScoring.test_score_unknown_reduction_raises[mean] __ self = net = [initialized]( module_=MLPModule( (nonlin): ReLU() (output_no....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope="module") def net_fit(self, net, data): X, y = data > return net.fit(X, y) skorch/tests/test_scoring.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ___ ERROR at setup of TestLossScoring.test_nonnull_sample_weight_raises[sum] ___ self = net = [initialized]( module_=MLPModule( (nonlin): ReLU() (output_no....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope="module") def net_fit(self, net, data): X, y = data > return net.fit(X, y) skorch/tests/test_scoring.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'sum' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ______ ERROR at setup of TestLossScoring.test_scored_net_output_type[sum] ______ self = scored_net = .ScoredNet'>[initialized]( module_=MLPModul....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope="module") def scored_net_fit(self, scored_net, data): X, y = data > return scored_net.fit(X, y) skorch/tests/test_scoring.py:57: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.7324, -0.6554], [-0.6946, -0.6917], [-0.6176, -0.7749], [-0.7759, -0.6168], ..., -0.5237], [-0.9074, -0.5168], [-0.8357, -0.5684], [-1.0259, -0.4439]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'sum' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _________ ERROR at setup of TestLossScoring.test_score_on_net_fit[sum] _________ self = net = [initialized]( module_=MLPModule( (nonlin): ReLU() (output_no....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope="module") def net_fit(self, net, data): X, y = data > return net.fit(X, y) skorch/tests/test_scoring.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'sum' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ ERROR at setup of TestLossScoring.test_scored_net_matches_criterion_value[sum] _ self = scored_net = .ScoredNet'>[initialized]( module_=MLPModul....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope="module") def scored_net_fit(self, scored_net, data): X, y = data > return scored_net.fit(X, y) skorch/tests/test_scoring.py:57: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.7324, -0.6554], [-0.6946, -0.6917], [-0.6176, -0.7749], [-0.7759, -0.6168], ..., -0.5237], [-0.9074, -0.5168], [-0.8357, -0.5684], [-1.0259, -0.4439]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'sum' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __ ERROR at setup of TestLossScoring.test_score_unknown_reduction_raises[sum] __ self = net = [initialized]( module_=MLPModule( (nonlin): ReLU() (output_no....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) @pytest.fixture(scope="module") def net_fit(self, net, data): X, y = data > return net.fit(X, y) skorch/tests/test_scoring.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'sum' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
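All of the setup errors above share a single root cause: torch.nn.functional.nll_loss
requires its target tensor to have 64-bit integer dtype ("Long"), but the targets
arrive as 32-bit ("Int", dtype=torch.int32). This build runs on armhf, a 32-bit
architecture where NumPy's default integer type is 32-bit, so the integer class
labels produced by the test data become int32 arrays and skorch's tensor conversion
preserves that dtype; on 64-bit buildds the same arrays presumably default to int64,
which would explain why these tests pass there. A minimal sketch reproducing the
mismatch and the cast that avoids it (illustrative only; the names are invented and
none of the skorch test fixtures are used):

    import numpy as np
    import torch
    import torch.nn.functional as F

    # Force int32 labels, as a 32-bit platform such as armhf produces by default.
    y = np.array([1, 0, 1], dtype=np.int32)
    log_probs = torch.log_softmax(torch.randn(3, 2), dim=1)

    target = torch.as_tensor(y)        # stays torch.int32 ("Int")
    try:
        F.nll_loss(log_probs, target)
    except RuntimeError as exc:
        print(exc)                     # expected scalar type Long but found Int

    # Casting the targets to 64-bit integers ("Long") satisfies nll_loss.
    print(F.nll_loss(log_probs, target.long()))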
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError =================================== FAILURES =================================== __________________ TestNeuralNet.test_takes_log_with_nllloss ___________________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_takes_log_with_nllloss(self, net_cls, module_cls, data): net = net_cls(module_cls, criterion=nn.NLLLoss, max_epochs=1) net.initialize() mock_loss = Mock(side_effect=nn.NLLLoss()) net.criterion_.forward = mock_loss > net.partial_fit(*data) # call partial_fit to avoid re-initialization skorch/tests/test_classifier.py:81: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3.9/unittest/mock.py:1093: in __call__ return self._mock_call(*args, **kwargs) /usr/lib/python3.9/unittest/mock.py:1097: in _mock_call return self._execute_mock_call(*args, **kwargs) /usr/lib/python3.9/unittest/mock.py:1158: in _execute_mock_call result = effect(*args, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor 
r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _______________ TestNeuralNet.test_takes_no_log_without_nllloss ________________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_takes_no_log_without_nllloss(self, net_cls, module_cls, data): net = net_cls(module_cls, criterion=nn.BCELoss, max_epochs=1) net.initialize() mock_loss = Mock(side_effect=nn.NLLLoss()) net.criterion_.forward = mock_loss > net.partial_fit(*data) # call partial_fit to avoid re-initialization skorch/tests/test_classifier.py:96: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3.9/unittest/mock.py:1093: in __call__ return self._mock_call(*args, **kwargs) /usr/lib/python3.9/unittest/mock.py:1097: in _mock_call return self._execute_mock_call(*args, **kwargs) /usr/lib/python3.9/unittest/mock.py:1158: in _execute_mock_call result = effect(*args, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[0.5515, 0.4485], [0.5248, 0.4752], [0.1004, 0.8996], [0.3882, 0.6118], [0.465...822, 0.5178], [0.4554, 0.5446], [0.5714, 0.4286], [0.3147, 0.6853]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. 
Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ____________________ TestNeuralNet.test_high_learning_rate _____________________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_high_learning_rate(self, net_cls, module_cls, data): # regression test for nan loss with high learning rates issue #481 net = net_cls(module_cls, max_epochs=2, lr=2, optimizer=torch.optim.Adam) > net.fit(*data) skorch/tests/test_classifier.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/adam.py:66: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. 
target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _______________ TestNeuralNet.test_binary_classes_set_by_default _______________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_binary_classes_set_by_default(self, net_cls, module_cls, data): > net = net_cls(module_cls).fit(*data) skorch/tests/test_classifier.py:111: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
        weight (Tensor, optional): a manual rescaling weight given to each
            class. If given, has to be a Tensor of size `C`
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        ignore_index (int, optional): Specifies a target value that is ignored
            and does not contribute to the input gradient. When :attr:`size_average` is
            ``True``, the loss is averaged over non-ignored targets. Default: -100
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied,
            ``'mean'``: the sum of the output will be divided by the number of
            elements in the output, ``'sum'``: the output will be summed. Note:
            :attr:`size_average` and :attr:`reduce` are in the process of being deprecated,
            and in the meantime, specifying either of those two args will override
            :attr:`reduction`. Default: ``'mean'``

    Example::

        >>> # input is of size N x C = 3 x 5
        >>> input = torch.randn(3, 5, requires_grad=True)
        >>> # each element in target has to have 0 <= value < C
        >>> target = torch.tensor([1, 0, 4])
        >>> output = F.nll_loss(F.log_softmax(input), target)
        >>> output.backward()
    """
    if not torch.jit.is_scripting():
        tens_ops = (input, target)
        if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
            return handle_torch_function(
                nll_loss, tens_ops, input, target, weight=weight, size_average=size_average,
                ignore_index=ignore_index, reduce=reduce, reduction=reduction)
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    dim = input.dim()
    if dim < 2:
        raise ValueError('Expected 2 or more dimensions (got {})'.format(dim))

    if input.size(0) != target.size(0):
        raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
                         .format(input.size(0), target.size(0)))
    if dim == 2:
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
____________________ TestNetWithDict.test_fit_predict_proba ____________________

self = <...>
net = <...>[initialized](
  module_=MyModule(
    (dense): Linear(in_features=20, out_features=2, bias=True)
  ),
)
data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., 2.9419577 , -2.1910605 , 1.2443967 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_fit_predict_proba(self, net, data):
        X = {'X0': data[0], 'X1': data[1]}
        y = data[2]
>       net.fit(X, y)

skorch/tests/test_dataset.py:356:
[call stack and nll_loss source identical to the first failure above]
target = tensor([1, 1, 1, ..., 0, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
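Every classifier failure in this run ends in the same RuntimeError: torch's nll_loss requires the target tensor to have dtype torch.int64 ("Long"), but on a 32-bit architecture such as armhf the default numpy integer is 32 bits wide, so the labels reach the criterion as torch.int32 ("Int"). Below is a minimal sketch of the failure mode and of an explicit cast that avoids it, assuming only numpy and torch; the variable names are illustrative and not taken from the test suite.

    import numpy as np
    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 2, requires_grad=True)
    # Forcing int32 reproduces what a 32-bit platform's default int looks like.
    y = np.array([0, 1, 1, 0], dtype=np.int32)

    target = torch.from_numpy(y)          # dtype=torch.int32 ("Int")
    try:
        F.nll_loss(F.log_softmax(logits, dim=-1), target)
    except RuntimeError as exc:
        print(exc)                        # expected scalar type Long but found Int

    # Casting the labels to 64-bit integers up front avoids the error.
    loss = F.nll_loss(F.log_softmax(logits, dim=-1), target.long())
    loss.backward()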
____________________ TestNetWithList.test_fit_predict_proba ____________________

self = <...>
net = <...>[initialized](
  module_=MyModule(
    (dense): Linear(in_features=20, out_features=2, bias=True)
  ),
)
data = ([array([[-0.9658346 , -2.1890705 , 0.16985609, ..., 2.9419577 , -2.1910605 , 1.2443967 ], [-0.45476..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_fit_predict_proba(self, net, data):
        X, y = data
>       net.fit(X, y)

skorch/tests/test_dataset.py:409:
[call stack and nll_loss source identical to the first failure above]
target = tensor([1, 1, 1, ..., 0, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
___________________ TestNetWithPandas.test_fit_predict_proba ___________________

self = <...>
net = <...>[initialized](
  module_=MyModule(
    (dense): Linear(in_features=20, out_features=2, bias=True)
  ),
)
data = (          0         1         2  ...        17        18        19
       0 -0.965835 -2.189070  0.169856  ... -0.896453..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_fit_predict_proba(self, net, data):
        X, y = data
>       net.fit(X, y)

skorch/tests/test_dataset.py:462:
[call stack and nll_loss source identical to the first failure above]
target = tensor([1, 1, 1, ..., 0, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
________________ TestSliceDict.test_grid_search_with_dict_works ________________

self = <...>
sldict_cls = <...>
data = (array([[-2.7252579 , 0.69913614, -0.02816787, ..., 1.1250238 , -0.40577692, -0.9420936 ], [-0.468117..., 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]))
classifier_module = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)

    def test_grid_search_with_dict_works(
            self, sldict_cls, data, classifier_module):
        from sklearn.model_selection import GridSearchCV
        from skorch import NeuralNetClassifier
        net = NeuralNetClassifier(classifier_module)
        X, y = data
        X = sldict_cls(X=X)
        params = {
            'lr': [0.01, 0.02],
            'max_epochs': [10, 20],
        }
        gs = GridSearchCV(net, params, refit=True, cv=3, scoring='accuracy',
                          iid=True)
>       gs.fit(X, y)

skorch/tests/test_helper.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/utils/validation.py:72: in inner_f
    return f(**kwargs)
/usr/lib/python3/dist-packages/sklearn/model_selection/_search.py:765: in fit
    self.best_estimator_.fit(X, y, **fit_params)
[remaining call stack and nll_loss source identical to the first failure above]
target = tensor([1, 0, 1, 1, 1, 0, 1, 0, 0, 1, ..., 0, 1, 1, 0, 0, 1, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
__________________ TestSliceDataset.test_fit_with_slds_works ___________________

self = <...>
slds = <...>
y = array([1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1,...1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0])
classifier_module = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)

    def test_fit_with_slds_works(self, slds, y, classifier_module):
        from skorch import NeuralNetClassifier
        net = NeuralNetClassifier(classifier_module)
>       net.fit(slds, y)  # does not raise

skorch/tests/test_helper.py:405:
[call stack and nll_loss source identical to the first failure above]
target = tensor([1, 0, 1, 1, 1, 0, 1, 0, 0, 1, ..., 0, 1, 1, 0, 0, 1, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
___________ TestSliceDataset.test_fit_with_slds_without_valid_works ____________

self = <...>
slds = <...>
y = array([1, 1, 1, 1, 0, 0, 1, 0, 0, 1, ..., 0, 0, 1, 1, 0])
classifier_module = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)

    def test_fit_with_slds_without_valid_works(self, slds, y, classifier_module):
        from skorch import NeuralNetClassifier
        net = NeuralNetClassifier(classifier_module, train_split=False)
>       net.fit(slds, y)  # does not raise

skorch/tests/test_helper.py:410:
[call stack and nll_loss source identical to the first failure above]
target = tensor([1, 1, 1, 1, 0, 0, 1, 0, 0, 1, ..., 0, 0, 1, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_____ TestSliceDataset.test_grid_search_with_slds_and_internal_split_works _____

self = <...>
slds = <...>
y = array([1, 1, 1, 1, 0, 0, 1, 0, 0, 1, ..., 0, 0, 1, 1, 0])
classifier_module = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)

    def test_grid_search_with_slds_and_internal_split_works(
            self, slds, y, classifier_module):
        from sklearn.model_selection import GridSearchCV
        from skorch import NeuralNetClassifier
        net = NeuralNetClassifier(classifier_module)
        params = {
            'lr': [0.01, 0.02],
            'max_epochs': [10, 20],
        }
        gs = GridSearchCV(net, params, refit=True, cv=3, scoring='accuracy',
                          iid=True)
>       gs.fit(slds, y)  # does not raise

skorch/tests/test_helper.py:440:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/utils/validation.py:72: in inner_f
    return f(**kwargs)
/usr/lib/python3/dist-packages/sklearn/model_selection/_search.py:765: in fit
    self.best_estimator_.fit(X, y, **fit_params)
[remaining call stack and nll_loss source identical to the first failure above]
target = tensor([1, 0, 1, 1, 1, 0, 1, 0, 0, 1, ..., 0, 1, 1, 0, 0, 1, 1, 0], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
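The SliceDict and SliceDataset failures above share the same root cause as the earlier ones: nothing in these code paths promotes the int32 labels to int64 before skorch's get_loss hands them to NLLLoss. One possible workaround at the estimator level is sketched below; this is an assumption for illustration, not skorch's own fix, and LongTargetClassifier is a hypothetical name. It relies only on the get_loss(y_pred, y_true, ...) hook visible in the tracebacks above.

    import torch
    from skorch import NeuralNetClassifier

    class LongTargetClassifier(NeuralNetClassifier):
        """Hypothetical workaround: promote integer targets to int64
        so that NLLLoss receives the Long tensors it expects."""

        def get_loss(self, y_pred, y_true, *args, **kwargs):
            # Only integer targets are affected; float targets pass through.
            if isinstance(y_true, torch.Tensor) and not y_true.is_floating_point():
                y_true = y_true.long()
            return super().get_loss(y_pred, y_true, *args, **kwargs)

Alternatively, calling net.fit(X, y.astype('int64')) sidesteps the issue without subclassing, which is likely why these tests only fail on 32-bit builders where int64 is not the numpy default.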
_____________ TestDataFrameTransformer.test_fit_transform_defaults _____________

self = <...>
transformer_cls = <...>
df =    col_floats  col_ints col_cats
0          0.1        11        a
1          0.2        11        b
2          0.3        10        a

    def test_fit_transform_defaults(self, transformer_cls, df):
        expected = {
            'X': np.asarray([
                [0.1, 11.0],
                [0.2, 11.0],
                [0.3, 10.0],
            ]).astype(np.float32),
            'col_cats': np.asarray([0, 1, 0]),
        }
        Xt = transformer_cls().fit_transform(df)
>       assert_dicts_equal(Xt, expected)

skorch/tests/test_helper.py:517:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

d0 = {'X': array([[ 0.1, 11. ], [ 0.2, 11. ], [ 0.3, 10. ]], dtype=float32), 'col_cats': array([0, 1, 0], dtype=int64)}
d1 = {'X': array([[ 0.1, 11. ], [ 0.2, 11. ], [ 0.3, 10. ]], dtype=float32), 'col_cats': array([0, 1, 0])}

    def assert_dicts_equal(d0, d1):
        assert d0.keys() == d1.keys()
        for key in d0.keys():
            val0, val1 = d0[key], d1[key]
            np.testing.assert_allclose(val0, val1)
>           assert val0.dtype == val1.dtype
E           AssertionError: assert dtype('int64') == dtype('int32')
E             +dtype('int64')
E             -dtype('int32')

skorch/tests/test_helper.py:20: AssertionError
___________ TestDataFrameTransformer.test_fit_and_transform_defaults ___________

    def test_fit_and_transform_defaults(self, transformer_cls, df):
        [same expected dict as in the previous test]
        Xt = transformer_cls().fit(df).transform(df)
>       assert_dicts_equal(Xt, expected)

skorch/tests/test_helper.py:529:
[assert_dicts_equal and the d0/d1 locals are identical to the failure above]
E           AssertionError: assert dtype('int64') == dtype('int32')

skorch/tests/test_helper.py:20: AssertionError
____ TestDataFrameTransformer.test_fit_transform_defaults_two_categoricals _____

    def test_fit_transform_defaults_two_categoricals(
            self, transformer_cls, df):
        expected = {
            'X': np.asarray([
                [0.1, 11.0],
                [0.2, 11.0],
                [0.3, 10.0],
            ]).astype(np.float32),
            'col_cats': np.asarray([0, 1, 0]),
            'col_foo': np.asarray([1, 1, 0]),
        }
        df = df.assign(col_foo=df['col_ints'].astype('category'))
        Xt = transformer_cls().fit_transform(df)
>       assert_dicts_equal(Xt, expected)

skorch/tests/test_helper.py:544:
[d0 carries int64 'col_cats' and 'col_foo' arrays, d1 the default-int expected arrays]
E           AssertionError: assert dtype('int64') == dtype('int32')

skorch/tests/test_helper.py:20: AssertionError
________ TestDataFrameTransformer.test_fit_transform_int_as_categorical ________

    def test_fit_transform_int_as_categorical(self, transformer_cls, df):
        expected = {
            'X': np.asarray([0.1, 0.2, 0.3]).astype(np.float32).reshape(-1, 1),
            'col_ints': np.asarray([1, 1, 0]),
            'col_cats': np.asarray([0, 1, 0]),
        }
        Xt = transformer_cls(treat_int_as_categorical=True).fit_transform(df)
>       assert_dicts_equal(Xt, expected)

skorch/tests/test_helper.py:553:
E           AssertionError: assert dtype('int64') == dtype('int32')

skorch/tests/test_helper.py:20: AssertionError
_______________ TestDataFrameTransformer.test_fit_transform_no_X _______________

    def test_fit_transform_no_X(self, transformer_cls, df):
        df = df[['col_ints', 'col_cats']]  # no float type present
        expected = {
            'col_ints': np.asarray([1, 1, 0]),
            'col_cats': np.asarray([0, 1, 0]),
        }
        Xt = transformer_cls(treat_int_as_categorical=True).fit_transform(df)
>       assert_dicts_equal(Xt, expected)

skorch/tests/test_helper.py:562:
E           AssertionError: assert dtype('int64') == dtype('int32')

skorch/tests/test_helper.py:20: AssertionError
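The five AssertionErrors above are a different symptom of the same 32-bit quirk: np.asarray([0, 1, 0]) picks the platform's default integer, which is int32 on armhf, while DataFrameTransformer emits int64 codes, so the dtype comparison in assert_dicts_equal fails even though the values match. A short sketch of the platform dependence, assuming only numpy; on an amd64 builder both sides would be int64 and the assertion would pass.

    import numpy as np

    # np.asarray with Python ints picks the platform default integer:
    # int32 on 32-bit platforms such as armhf, int64 on typical 64-bit Linux.
    expected = np.asarray([0, 1, 0])
    print(expected.dtype)

    # Pinning the width on both sides makes the comparison platform-independent.
    pinned = np.asarray([0, 1, 0], dtype=np.int64)
    assert pinned.dtype == np.dtype('int64')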
_________ TestDataFrameTransformer.test_fit_and_predict_with_pipeline __________

self = <...>
pipe = Pipeline(steps=[('transform', DataFrameTransformer()), ('net', <...>)])

>       pipe.fit(df, y)

skorch/tests/test_helper.py:672:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/pipeline.py:335: in fit
    self._final_estimator.fit(Xt, y, **fit_params_last_step)
[remaining call stack and nll_loss source identical to the first failure above]

input = tensor([[-2.5491, -0.0814], [-2.0619, -0.1361], [-2.2955, -0.1062]], grad_fn=<...>)
target = tensor([0, 0, 1], dtype=torch.int32)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
# # [issue #317]:https://github.com/skorch-dev/skorch/issues/317 X, y = data n1 = net_cls(module_cls) > n1.partial_fit(X, y, epochs=1) skorch/tests/test_net.py:136: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. 
When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
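Every failure in this run is the same dtype mismatch: the labels reach torch.nn.NLLLoss as int32 tensors (target = tensor([...], dtype=torch.int32) above), while nll_loss only accepts class indices of scalar type Long (int64). On 64-bit builds numpy's default integer is int64 and the problem never surfaces; on 32-bit armhf the default integer is int32, which is presumably why the error only shows up here. A minimal sketch, assuming only stock PyTorch (this snippet is illustration, not part of the build), reproducing the error and the cast that avoids it:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 2, requires_grad=True)   # N x C, as in the failing tests
    log_probs = F.log_softmax(logits, dim=-1)

    target_int32 = torch.tensor([1, 0, 1, 1], dtype=torch.int32)
    try:
        F.nll_loss(log_probs, target_int32)          # rejected: int32 class indices
    except RuntimeError as err:
        print(err)                                   # "expected scalar type Long but found Int"

    target_long = target_int32.long()                # cast int32 -> int64 (Long)
    loss = F.nll_loss(log_probs, target_long)
    loss.backward()                                  # now computes gradients normally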
____________ TestNeuralNet.test_train_net_after_copy[copy.deepcopy] ____________ [fixtures and test source identical to the pickle variant above; copy_method = 'copy.deepcopy'] > n1.partial_fit(X, y, epochs=1) skorch/tests/test_net.py:136: [traceback and nll_loss source listing identical to the one shown above] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
________________________ TestNeuralNet.test_net_learns _________________________ def test_net_learns(self, net_cls, module_cls, data): X, y = data net = net_cls( module_cls, max_epochs=10, lr=0.1, ) > net.fit(X, y) skorch/tests/test_net.py:303: [traceback identical, entering the optimizer via torch/optim/sgd.py:86] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
____________ TestNeuralNet.test_save_and_load_from_checkpoint[True] ____________ tmpdir = local('/tmp/pytest-of-pbuilder1/pytest-0/test_save_and_load_from_checkp0') explicit_init = True @pytest.mark.parametrize("explicit_init", [True, False]) def test_save_and_load_from_checkpoint( self, net_cls, module_cls, data, checkpoint_cls, tmpdir, explicit_init): skorch_dir = tmpdir.mkdir('skorch') f_params = skorch_dir.join('params.pt') f_optimizer = skorch_dir.join('optimizer.pt') f_criterion = skorch_dir.join('criterion.pt') f_history = skorch_dir.join('history.json') cp = checkpoint_cls( monitor=None, f_params=str(f_params), f_optimizer=str(f_optimizer), f_criterion=str(f_criterion), f_history=str(f_history)) net = net_cls( module_cls, max_epochs=4, lr=0.1, optimizer=torch.optim.Adam, callbacks=[cp]) > net.fit(*data) skorch/tests/test_net.py:664: [traceback identical, except the optimizer step enters via torch/optim/adam.py:66 instead of sgd.py:86] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
___________ TestNeuralNet.test_save_and_load_from_checkpoint[False] ____________ [fixtures and test source identical to the previous test; explicit_init = False, tmpdir = local('/tmp/pytest-of-pbuilder1/pytest-0/test_save_and_load_from_checkp1')] > net.fit(*data) skorch/tests/test_net.py:664: [traceback identical, via adam.py:66] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
____________ TestNeuralNet.test_checkpoint_with_prefix_and_dirname _____________ tmpdir = local('/tmp/pytest-of-pbuilder1/pytest-0/test_checkpoint_with_prefix_an0') def test_checkpoint_with_prefix_and_dirname( self, net_cls, module_cls, data, checkpoint_cls, tmpdir): exp_dir = tmpdir.mkdir('skorch') exp_basedir = exp_dir.join('exp1') cp = checkpoint_cls( monitor=None, fn_prefix='unet_', dirname=str(exp_basedir)) net = net_cls( module_cls, max_epochs=4, lr=0.1, optimizer=torch.optim.Adam, callbacks=[cp]) > net.fit(*data) skorch/tests/test_net.py:696: [traceback identical, via adam.py:66] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_________ TestNeuralNet.test_save_and_load_from_checkpoint_formatting __________ tmpdir = local('/tmp/pytest-of-pbuilder1/pytest-0/test_save_and_load_from_checkp2') def test_save_and_load_from_checkpoint_formatting( self, net_cls, module_cls, data, checkpoint_cls, tmpdir): def epoch_3_scorer(net, *_): return 1 if net.history[-1, 'epoch'] == 3 else 0 from skorch.callbacks import EpochScoring scoring = EpochScoring( scoring=epoch_3_scorer, on_train=True) skorch_dir = tmpdir.mkdir('skorch') f_params = skorch_dir.join( 'model_epoch_{last_epoch[epoch]}.pt') f_optimizer = skorch_dir.join( 'optimizer_epoch_{last_epoch[epoch]}.pt') f_criterion = skorch_dir.join( 'criterion_epoch_{last_epoch[epoch]}.pt') f_history = skorch_dir.join( 'history.json') cp = checkpoint_cls( monitor='epoch_3_scorer', f_params=str(f_params), f_optimizer=str(f_optimizer), f_criterion=str(f_criterion), f_history=str(f_history)) net = net_cls( module_cls, max_epochs=5, lr=0.1, optimizer=torch.optim.Adam, callbacks=[ ('my_score', scoring), cp ]) > net.fit(*data) skorch/tests/test_net.py:734: [traceback identical, via adam.py:66] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_____________________ TestNeuralNet.test_set_params_works ______________________ net = [initialized]( module_=MLPModule( (nonlin): ReLU() (output_no....5, inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ) def test_set_params_works(self, net, data): X, y = data > net.fit(X, y) skorch/tests/test_net.py:947: [traceback identical, via sgd.py:86] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
__________ TestNeuralNet.test_changing_model_reinitializes_optimizer ___________ def test_changing_model_reinitializes_optimizer(self, net, data): # The idea is that we change the model using `set_params` to # add parameters. Since the optimizer depends on the model # parameters it needs to be reinitialized. X, y = data net.set_params(module__nonlin=nn.ReLU()) > net.fit(X, y) skorch/tests/test_net.py:995: [traceback identical, via sgd.py:86; the input tensor differs because the module was rebuilt, but the target again has dtype=torch.int32] E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ----------------------------- Captured stdout call ----------------------------- Re-initializing module because the following parameters were re-set: nonlin. Re-initializing optimizer. Re-initializing module because the following parameters were re-set: nonlin. Re-initializing optimizer.
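Aside from the dtype failure itself, the captured stdout above records a separate piece of skorch behavior: calling set_params with a module__* argument on an initialized net rebuilds the module, and because the optimizer holds references to the old parameters, skorch re-initializes the optimizer as well. A small sketch of that mechanism, assuming skorch's toy helper skorch.toy.make_classifier for brevity (the tests build the same MLPModule via functools.partial):

    import torch.nn as nn
    from skorch import NeuralNetClassifier
    from skorch.toy import make_classifier

    net = NeuralNetClassifier(make_classifier(), max_epochs=1, lr=0.1)
    net.initialize()
    # Swapping a module parameter on an initialized net triggers the messages
    # captured above: the module is rebuilt, then the optimizer is rebuilt.
    net.set_params(module__nonlin=nn.Tanh())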
___________________ TestNeuralNet.test_module_params_in_init ___________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_module_params_in_init(self, net_cls, module_cls, data):
        X, y = data
        net = net_cls(
            module=module_cls,
            module__hidden_units=20,
            module__nonlin=nn.Tanh(),
        )
>       net.fit(X, y)

skorch/tests/test_net.py:1105:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
skorch/net.py:903: in fit
    self.partial_fit(X, y, **fit_params)
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/net.py:709: in train_step
    self.optimizer_.step(step_fn)
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step
    loss = closure()
skorch/net.py:705: in step_fn
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.3933, -1.1234], [-0.9242, -0.5056], [-0.9528, -0.4872], [-0.9131, -0.5130], ..., -1.0927], [-0.6557, -0.7321], [-0.3594, -1.1976], [-0.6538, -0.7341]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100,
             reduce=None, reduction='mean'):
    # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor
    r"""The negative log likelihood loss.

    See :class:`~torch.nn.NLLLoss` for details.

    Args:
        input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)`
            in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1`
            in the case of K-dimensional loss.
        target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`,
            or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for
            K-dimensional loss.
        weight (Tensor, optional): a manual rescaling weight given to each
            class. If given, has to be a Tensor of size `C`
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        ignore_index (int, optional): Specifies a target value that is ignored
            and does not contribute to the input gradient. When :attr:`size_average` is
            ``True``, the loss is averaged over non-ignored targets. Default: -100
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied,
            ``'mean'``: the sum of the output will be divided by the number of
            elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average`
            and :attr:`reduce` are in the process of being deprecated, and in the meantime,
            specifying either of those two args will override :attr:`reduction`. Default: ``'mean'``

    Example::

        >>> # input is of size N x C = 3 x 5
        >>> input = torch.randn(3, 5, requires_grad=True)
        >>> # each element in target has to have 0 <= value < C
        >>> target = torch.tensor([1, 0, 4])
        >>> output = F.nll_loss(F.log_softmax(input), target)
        >>> output.backward()
    """
    if not torch.jit.is_scripting():
        tens_ops = (input, target)
        if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
            return handle_torch_function(
                nll_loss, tens_ops, input, target, weight=weight, size_average=size_average,
                ignore_index=ignore_index, reduce=reduce, reduction=reduction)
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    dim = input.dim()
    if dim < 2:
        raise ValueError('Expected 2 or more dimensions (got {})'.format(dim))

    if input.size(0) != target.size(0):
        raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
                         .format(input.size(0), target.size(0)))
    if dim == 2:
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
____________________ TestNeuralNet.test_in_sklearn_pipeline ____________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
pipe = Pipeline(steps=[('scale', StandardScaler()), ('net', <...>)])

>       pipe.fit(X, y)

skorch/tests/test_net.py:1254:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/pipeline.py:335: in fit
    self._final_estimator.fit(Xt, y, **fit_params_last_step)
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-0.5869, -0.8121], [-0.6343, -0.7556], [-1.1516, -0.3800], [-0.8781, -0.5371], ..., -0.6121], [-0.7542, -0.6356], [-0.5812, -0.8192], [-1.2586, -0.3342]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
----------------------------- Captured stdout call -----------------------------
Re-initializing module because the following parameters were re-set: nonlin.
Re-initializing optimizer.
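Every failure in this run is the same RuntimeError: torch.nn.functional.nll_loss accepts class-index targets only as torch.long (int64) tensors, while the targets above arrive as torch.int32. A likely explanation, though the log does not state it, is the 32-bit armhf platform: numpy's default integer type there is int32, so the integer labels built by the test fixtures reach the criterion un-promoted. The following minimal sketch (illustrative values, not taken from the test suite) reproduces the mismatch and shows the usual fix:

    import numpy as np
    import torch
    import torch.nn.functional as F

    # Log-probabilities for a batch of 8 samples and 2 classes.
    log_probs = F.log_softmax(torch.randn(8, 2), dim=-1)

    # int32 labels, as produced by default on 32-bit platforms.
    y = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=np.int32)
    target = torch.from_numpy(y)                   # dtype=torch.int32

    # F.nll_loss(log_probs, target)                # would raise: RuntimeError:
    #                                              # expected scalar type Long but found Int
    loss = F.nll_loss(log_probs, target.long())    # int64 targets succeed

Casting the labels once on the caller's side, e.g. y = y.astype(np.int64) before net.fit(X, y), would get every test below past this point.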
_____________________ TestNeuralNet.test_grid_search_works _____________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))

    def test_grid_search_works(self, net_cls, module_cls, data):
        net = net_cls(module_cls)
        X, y = data
        params = {
            'lr': [0.01, 0.02],
            'max_epochs': [10, 20],
            'module__hidden_units': [10, 20],
        }
        gs = GridSearchCV(net, params, refit=True, cv=3, scoring='accuracy',
                          iid=True)
>       gs.fit(X[:100], y[:100])  # for speed

skorch/tests/test_net.py:1269:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/utils/validation.py:72: in inner_f
    return f(**kwargs)
/usr/lib/python3/dist-packages/sklearn/model_selection/_search.py:765: in fit
    self.best_estimator_.fit(X, y, **fit_params)
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-0.4494, -1.0162], [-0.0521, -2.9810], [-0.2975, -1.3575], [-0.5069, -0.9221], ..., -2.3735], [-0.3894, -1.1314], [-0.4122, -1.0853], [-0.5051, -0.9249]], grad_fn=<...>)
target = tensor([0, 0, 0, 0, 1, 1, 1, 0, ..., 0, 1, 0, 0, 1, 1], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
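The grid-search failure is still the same dtype error, but the test itself illustrates skorch's scikit-learn integration: net-level parameters (lr, max_epochs) and module constructor parameters (module__hidden_units) sit in one search space. A rough sketch of that pattern, where TwoLayerNet is a hypothetical stand-in rather than the suite's MLPModule, labels are cast to int64 per the note above, and the iid argument the test passes is omitted because scikit-learn deprecated and later removed it:

    import numpy as np
    import torch.nn as nn
    from sklearn.model_selection import GridSearchCV
    from skorch import NeuralNetClassifier

    class TwoLayerNet(nn.Module):
        """Hypothetical stand-in for the test suite's classifier module."""
        def __init__(self, hidden_units=10):
            super().__init__()
            self.seq = nn.Sequential(
                nn.Linear(20, hidden_units), nn.ReLU(),
                nn.Linear(hidden_units, 2), nn.Softmax(dim=-1),
            )

        def forward(self, X):
            return self.seq(X)

    X = np.random.randn(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, size=100).astype(np.int64)  # int64: see note above

    net = NeuralNetClassifier(TwoLayerNet, verbose=0)
    params = {
        'lr': [0.01, 0.02],
        'max_epochs': [10, 20],
        'module__hidden_units': [10, 20],  # routed to TwoLayerNet.__init__
    }
    gs = GridSearchCV(net, params, refit=True, cv=3, scoring='accuracy')
    gs.fit(X, y)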
_______________________ TestNeuralNet.test_net_no_valid ________________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))

    def test_net_no_valid(self, net_cls, module_cls, data):
        net = net_cls(
            module_cls,
            max_epochs=10,
            lr=0.1,
            train_split=None,
        )
        X, y = data
>       net.fit(X, y)

skorch/tests/test_net.py:1304:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-0.4030, -1.1035], [-0.6978, -0.6885], [-0.7526, -0.6370], [-0.7386, -0.6497], ..., -0.3595], [-0.6340, -0.7560], [-0.5740, -0.8284], [-0.8443, -0.5618]], grad_fn=<...>)
target = tensor([0, 0, 1, 0, 1, 1, 0, 0, ..., 1, 1, 1, 0, 0, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
__________________ TestNeuralNet.test_with_initialized_module __________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))

    def test_with_initialized_module(self, net_cls, module_cls, data):
        X, y = data
        net = net_cls(module_cls(), max_epochs=1)
>       net.fit(X, y)

skorch/tests/test_net.py:1390:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
___________ TestNeuralNet.test_with_initialized_module_other_params ____________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))

    def test_with_initialized_module_other_params(self, net_cls, module_cls, data):
        X, y = data
        net = net_cls(module_cls(), max_epochs=1, module__hidden_units=123)
>       net.fit(X, y)

skorch/tests/test_net.py:1395:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-1.3264, nan], [-0.0976, -2.1083], [-2.5823, nan], [ nan, nan], ..., nan], [ nan, -2.2682], [ nan, nan], [-4.1603, -2.0727]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
----------------------------- Captured stdout call -----------------------------
Re-initializing module because the following parameters were re-set: hidden_units.
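The captured stdout above records skorch's behavior when module__* overrides are combined with an already-instantiated module: the instance cannot be reconfigured in place, so skorch rebuilds the module from its class on fit(). A brief sketch of the same situation, reusing the hypothetical TwoLayerNet, X, and y from the earlier example:

    net = NeuralNetClassifier(
        TwoLayerNet(),               # an instance rather than the class
        max_epochs=1,
        module__hidden_units=123,    # conflicts with the instance's settings
    )
    net.fit(X, y)
    # prints, per the log above:
    #   Re-initializing module because the following parameters were re-set: hidden_units.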
____________ TestNeuralNet.test_with_initialized_module_non_default ____________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))
capsys = <_pytest.capture.CaptureFixture object at 0xe1914e68>

    def test_with_initialized_module_non_default(
            self, net_cls, module_cls, data, capsys):
        X, y = data
        net = net_cls(module_cls(hidden_units=123), max_epochs=1)
>       net.fit(X, y)

skorch/tests/test_net.py:1403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-0.9527, -0.4873], [-1.1877, -0.3637], [-0.9384, -0.4963], [-0.6876, -0.6987], ..., -0.3677], [-0.8482, -0.5589], [-0.9417, -0.4943], [-1.1839, -0.3654]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_____________ TestNeuralNet.test_message_fit_with_initialized_net ______________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))
capsys = <_pytest.capture.CaptureFixture object at 0xe1779ce8>

    def test_message_fit_with_initialized_net(
            self, net_cls, module_cls, data, capsys):
        net = net_cls(module_cls).initialize()
>       net.fit(*data)

skorch/tests/test_net.py:1413:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-1.2334, -0.3443], [-0.6289, -0.7618], [-1.6833, -0.2055], [-0.4345, -1.0429], ..., -0.6201], [-1.1077, -0.4010], [-0.7056, -0.6808], [-0.6610, -0.7264]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
----------------------------- Captured stdout call -----------------------------
Re-initializing module.
Re-initializing optimizer.
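test_message_fit_with_initialized_net checks the companion message: with the default warm_start=False, calling fit() on a net that is already initialized discards and re-creates the module and optimizer, which is what the captured stdout shows. A short sketch under the same assumptions as the earlier examples:

    net = NeuralNetClassifier(TwoLayerNet, max_epochs=1)
    net.initialize()    # builds module_, criterion_ and optimizer_
    net.fit(X, y)       # does not resume; prints, per the log above:
    #   Re-initializing module.
    #   Re-initializing optimizer.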
________________ TestNeuralNet.test_with_initialized_sequential ________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
data = (array([...]), array([...]))
capsys = <_pytest.capture.CaptureFixture object at 0xe172c148>

    def test_with_initialized_sequential(self, net_cls, data, capsys):
        X, y = data
        module = nn.Sequential(
            nn.Linear(X.shape[1], 10),
            nn.ReLU(),
            nn.Linear(10, 2),
            nn.Softmax(dim=-1),
        )
        net = net_cls(module, max_epochs=1)
>       net.fit(X, y)

skorch/tests/test_net.py:1463:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-1.5993, -0.2257], [-0.6843, -0.7021], [-1.9125, -0.1598], [-0.6623, -0.7250], ..., -0.7000], [-0.6595, -0.7279], [-1.2315, -0.3451], [-0.4969, -0.9375]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
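As test_with_initialized_sequential demonstrates, the wrapped module need not be a custom class; any nn.Module instance, including a plain nn.Sequential, is accepted as long as its output suits the criterion. A minimal sketch with the same illustrative X and y as before:

    import torch.nn as nn

    module = nn.Sequential(
        nn.Linear(X.shape[1], 10),
        nn.ReLU(),
        nn.Linear(10, 2),
        nn.Softmax(dim=-1),  # probabilities; skorch takes their log for its default NLLLoss
    )
    net = NeuralNetClassifier(module, max_epochs=1)
    net.fit(X, y)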
__________________ TestNeuralNet.test_call_fit_twice_retrains __________________

self = <skorch.tests.test_net.TestNeuralNet object at 0x...>
net_cls = <class 'skorch.classifier.NeuralNetClassifier'>
module_cls = functools.partial(<class 'skorch.toy.MLPModule'>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([...]), array([...]))

    def test_call_fit_twice_retrains(self, net_cls, module_cls, data):
        # test that after second fit call, even without entering the
        # fit loop, parameters have changed (because the module was
        # re-initialized)
        X, y = data[0][:100], data[1][:100]
>       net = net_cls(module_cls, warm_start=False).fit(X, y)

skorch/tests/test_net.py:1473:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[... remaining frames and nll_loss source identical to test_module_params_in_init above ...]

input = tensor([[-0.6494, -0.7389], [-0.5963, -0.8004], [-0.7641, -0.6269], [-0.6973, -0.6890], ..., -0.4024], [-0.8529, -0.5554], [-0.7667, -0.6246], [-0.7770, -0.6158]], grad_fn=<...>)
target = tensor([0, 0, 0, 0, 1, 1, 1, 0, ..., 0, 1, 0, 0, 1, 1], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
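This test and the two that follow pin down the retraining semantics controlled by warm_start: with the default warm_start=False every fit() call re-initializes the parameters, with warm_start=True a second fit() resumes from the trained state, and partial_fit() always resumes, initializing lazily on first use. A compact sketch, same assumptions as the earlier examples:

    # warm_start=False: the second fit() starts from fresh parameters.
    net_cold = NeuralNetClassifier(TwoLayerNet, warm_start=False, verbose=0)
    net_cold.fit(X, y)
    net_cold.fit(X, y)      # re-initializes module and optimizer first

    # warm_start=True: the second fit() continues from the learned state.
    net_warm = NeuralNetClassifier(TwoLayerNet, warm_start=True, verbose=0)
    net_warm.fit(X, y)
    net_warm.fit(X, y)      # no re-initialization

    # partial_fit() works without a prior fit() and never resets parameters.
    net_part = NeuralNetClassifier(TwoLayerNet, verbose=0)
    net_part.partial_fit(X, y)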
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _________________ TestNeuralNet.test_call_fit_twice_warmstart __________________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_call_fit_twice_warmstart(self, net_cls, module_cls, data): X, y = data[0][:100], data[1][:100] > net = net_cls(module_cls, warm_start=True).fit(X, y) skorch/tests/test_net.py:1486: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.6494, -0.7389], [-0.5963, -0.8004], [-0.7641, -0.6269], [-0.6973, -0.6890], ..., -0.4024], [-0.8529, -0.5554], [-0.7667, -0.6246], [-0.7770, -0.6158]], grad_fn=) target = tensor([0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, ..., 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. 
weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError __________________ TestNeuralNet.test_partial_fit_first_call ___________________ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_partial_fit_first_call(self, net_cls, module_cls, data): # It should be possible to partial_fit without calling fit first. 
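Every failure in this section is the same dtype mismatch: the target tensors reach torch.nn.functional.nll_loss as torch.int32 (visible in the captured locals as dtype=torch.int32), while the underlying kernel requires torch.long (int64). A minimal sketch of the mismatch and the cast that resolves it, independent of skorch:

import torch
import torch.nn.functional as F

logits = torch.randn(8, 2, requires_grad=True)
log_probs = F.log_softmax(logits, dim=1)

target = torch.tensor([0, 1, 1, 0, 1, 0, 0, 1], dtype=torch.int32)
try:
    F.nll_loss(log_probs, target)            # int32 target, as in the failures above
except RuntimeError as exc:
    print(exc)                               # expected scalar type Long but found Int

loss = F.nll_loss(log_probs, target.long())  # casting to int64 (torch.long) fixes it
loss.backward()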
__________________ TestNeuralNet.test_partial_fit_first_call ___________________

self = 
net_cls = 
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [array reprs identical to the data fixture in the first failure above]

    def test_partial_fit_first_call(self, net_cls, module_cls, data):
        # It should be possible to partial_fit without calling fit first.
        X, y = data[0][:100], data[1][:100]
        # does not raise
>       net_cls(module_cls, warm_start=True).partial_fit(X, y)

skorch/tests/test_net.py:1501: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

[input and target tensors identical to the locals in test_call_fit_twice_warmstart above; target dtype=torch.int32]
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
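The int32 targets are plausibly an artifact of this 32-bit armhf build environment: numpy's default integer type follows the platform's C long, so labels created with np.array come out as int32 here even though the same code yields int64 on most 64-bit Linux hosts. A hedged sketch of the platform difference and an explicit cast before fitting (net, X and the fit call are illustrative):

import numpy as np

y = np.asarray([0, 1, 1, 0])
print(y.dtype)            # int32 on 32-bit platforms such as armhf; int64 on most 64-bit Linux
y64 = y.astype(np.int64)  # torch maps int64 arrays to torch.long tensors
# net.fit(X, y64)         # hypothetical call: cast labels before handing them to skorch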
________________ TestNeuralNet.test_call_partial_fit_after_fit _________________

self = 
net_cls = 
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [array reprs identical to the data fixture in the first failure above]

    def test_call_partial_fit_after_fit(self, net_cls, module_cls, data):
        X, y = data[0][:100], data[1][:100]
>       net = net_cls(module_cls, warm_start=False).fit(X, y)

skorch/tests/test_net.py:1505: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

[input and target tensors identical to the locals in test_call_fit_twice_warmstart above; target dtype=torch.int32]
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
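The deprecation note in the docstring above can be made concrete: when the legacy size_average/reduce flags are given, the body maps them onto a reduction string via _Reduction.legacy_get_string. A small sketch mirroring those semantics (this reimplementation is illustrative, not the private API itself):

def legacy_reduction(size_average=None, reduce=None):
    # None means "use the legacy default", which is True for both flags
    size_average = True if size_average is None else size_average
    reduce = True if reduce is None else reduce
    if not reduce:
        return 'none'
    return 'mean' if size_average else 'sum'

assert legacy_reduction() == 'mean'
assert legacy_reduction(size_average=False) == 'sum'
assert legacy_reduction(reduce=False) == 'none'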
_________ TestNeuralNet.test_net_initialized_with_custom_dataset_args __________

self = 
net_cls = 
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [array reprs identical to the data fixture in the first failure above]
dataset_cls = 

    def test_net_initialized_with_custom_dataset_args(
            self, net_cls, module_cls, data, dataset_cls):
        side_effect = []

        class MyDataset(dataset_cls):
            def __init__(self, *args, foo, **kwargs):
                super().__init__(*args, **kwargs)
                side_effect.append(foo)

        net = net_cls(
            module_cls,
            dataset=MyDataset,
            dataset__foo=123,
            max_epochs=1,
        )
>       net.fit(*data)

skorch/tests/test_net.py:1541: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
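The dim and batch-size checks at the bottom of nll_loss encode its shape contract: (N, C) log-probabilities against an (N,) target. A quick sketch of a valid call and of the ValueError path visible in the source above:

import torch
import torch.nn.functional as F

log_probs = torch.randn(4, 3).log_softmax(dim=1)   # input of shape (N, C) = (4, 3)
target = torch.tensor([0, 2, 1, 1])                # shape (N,), values in [0, C)
print(F.nll_loss(log_probs, target))

try:
    F.nll_loss(log_probs, torch.tensor([0, 1]))    # batch sizes 4 vs 2
except ValueError as exc:
    print(exc)  # Expected input batch_size (4) to match target batch_size (2).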
__________ TestNeuralNet.test_net_initialized_with_initalized_dataset __________

self = 
net_cls = 
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [array reprs identical to the data fixture in the first failure above]
dataset_cls = 

    @pytest.mark.xfail(raises=ValueError)
    def test_net_initialized_with_initalized_dataset(
            self, net_cls, module_cls, data, dataset_cls):
        net = net_cls(
            module_cls,
            dataset=dataset_cls(*data),
            max_epochs=1,
            # Disable caching to highlight the issue with this
            # test case (mismatching size between y values)
            callbacks__valid_acc__use_caching=False,
        )
        # FIXME: When dataset is initialized, X and y do not matter
        # anymore
>       net.fit(*data)  # should not raise

skorch/tests/test_net.py:1558: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

[input and target tensors identical to the locals in test_net_initialized_with_custom_dataset_args above; target dtype=torch.int32]
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
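As the docstring's example hints, nll_loss expects log-probabilities, which is why the skorch test modules end in a Softmax/LogSoftmax nonlinearity. A sketch of the standard equivalence (CrossEntropyLoss fuses LogSoftmax and NLLLoss into one step):

import torch
import torch.nn as nn

logits = torch.randn(6, 4)
target = torch.randint(0, 4, (6,))    # randint already yields torch.long targets

nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
ce = nn.CrossEntropyLoss()(logits, target)
print(torch.allclose(nll, ce))        # True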
__________ TestNeuralNet.test_net_initialized_with_partialed_dataset ___________

self = 
net_cls = 
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [array reprs identical to the data fixture in the first failure above]
dataset_cls = 

    def test_net_initialized_with_partialed_dataset(
            self, net_cls, module_cls, data, dataset_cls):
        X, y = data
        net = net_cls(
            module_cls,
            dataset=partial(dataset_cls, length=len(y)),
            train_split=None,
            max_epochs=1,
        )
>       net.fit(X, y)  # does not raise

skorch/tests/test_net.py:1569: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.4030, -1.1035], [-0.6978, -0.6885], [-0.7526, -0.6370], [-0.7386, -0.6497], ..., -0.3595], [-0.6340, -0.7560], [-0.5740, -0.8284], [-0.8443, -0.5618]], grad_fn=)
target = tensor([0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, ..., 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
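The weight argument described in the docstring above takes one rescaling factor per class, i.e. a tensor of size C. A minimal sketch:

import torch
import torch.nn.functional as F

log_probs = torch.randn(5, 3).log_softmax(dim=1)
target = torch.tensor([0, 1, 2, 1, 0])
class_weight = torch.tensor([1.0, 2.0, 0.5])   # size C: one factor per class
loss = F.nll_loss(log_probs, target, weight=class_weight)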
_____________________ TestNeuralNet.test_repr_fitted_works _____________________

self = 
net_cls = 
module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [array reprs identical to the data fixture in the first failure above]

    def test_repr_fitted_works(self, net_cls, module_cls, data):
        X, y = data
        net = net_cls(
            module_cls,
            module__hidden_units=11,
            module__nonlin=nn.PReLU(),
        )
>       net.fit(X[:50], y[:50])

skorch/tests/test_net.py:1634: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.7808, -0.6125], [-0.5231, -0.8982], [-0.3669, -1.1804], [-0.5699, -0.8337], ..., -0.2112], [-0.1576, -1.9255], [-0.6179, -0.7745], [-0.7719, -0.6201]], grad_fn=)
target = tensor([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
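Likewise ignore_index: target entries equal to it (default -100) contribute neither to the loss nor to the input gradient, and the mean is taken over the remaining targets only. A minimal sketch:

import torch
import torch.nn.functional as F

log_probs = torch.randn(4, 3).log_softmax(dim=1)
target = torch.tensor([0, -100, 2, -100])                # the -100 entries are skipped
loss = F.nll_loss(log_probs, target, ignore_index=-100)  # averaged over 2 targets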
________________ TestNeuralNet.test_fit_params_passed_to_module ________________

self = 
net_cls = 
data = [array reprs identical to the data fixture in the first failure above]

    def test_fit_params_passed_to_module(self, net_cls, data):
        from skorch.toy import MLPModule

        X, y = data
        side_effect = []

        class FPModule(MLPModule):
            # pylint: disable=arguments-differ
            def forward(self, X, **fit_params):
                side_effect.append(fit_params)
                return super().forward(X)

        net = net_cls(FPModule, max_epochs=1, batch_size=50, train_split=None)
        # remove callbacks to have better control over side_effect
        net.initialize()
        net.callbacks_ = []
>       net.fit(X[:100], y[:100], foo=1, bar=2)

skorch/tests/test_net.py:1674: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-2.5768, nan], [-1.7573, nan], [ nan, nan], [ nan, nan], ..., -1.8077], [ nan, nan], [ nan, nan], [-0.7250, -0.2425]], grad_fn=)
target = tensor([0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
----------------------------- Captured stdout call -----------------------------
Re-initializing module.
Re-initializing optimizer.
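Apart from the dtype failure, the captured locals of this test show NaN logits coming out of the module. A small defensive check of the kind one might add before the criterion (assert_finite is a hypothetical helper, not part of skorch or torch):

import torch

def assert_finite(t, name='tensor'):
    # hypothetical helper: fail fast if the module emits NaN or Inf logits
    if not torch.isfinite(t).all():
        raise ValueError('{} contains NaN or Inf values'.format(name))

# assert_finite(y_pred, 'y_pred')  # e.g. called inside an overridden get_loss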
__________ TestNeuralNet.test_fit_params_passed_to_module_in_pipeline __________

self = 
net_cls = 
data = [array reprs identical to the data fixture in the first failure above]

    def test_fit_params_passed_to_module_in_pipeline(self, net_cls, data):
        from skorch.toy import MLPModule

        X, y = data
        side_effect = []

        class FPModule(MLPModule):
            # pylint: disable=arguments-differ
            def forward(self, X, **fit_params):
                side_effect.append(fit_params)
                return super().forward(X)

        net = net_cls(FPModule, max_epochs=1, batch_size=50, train_split=None)
        net.initialize()
        net.callbacks_ = []
        pipe = Pipeline([
            ('net', net),
        ])
>       pipe.fit(X[:100], y[:100], net__foo=1, net__bar=2)

skorch/tests/test_net.py:1701: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/pipeline.py:335: in fit
    self._final_estimator.fit(Xt, y, **fit_params_last_step)
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

[input and target tensors identical to the locals in test_fit_params_passed_to_module above; target dtype=torch.int32]
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
----------------------------- Captured stdout call -----------------------------
Re-initializing module.
Re-initializing optimizer.
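The pipeline variant fails identically because sklearn's Pipeline only routes the step-prefixed fit params and then delegates to net.fit, as the sklearn/pipeline.py frame above shows. A hedged sketch of the routing, assuming net, X and y as constructed in the test above:

from sklearn.pipeline import Pipeline

pipe = Pipeline([('net', net)])
# Pipeline.fit strips the 'net__' prefix and forwards the rest, so this is
# equivalent to net.fit(X, y, foo=1, bar=2)
pipe.fit(X, y, net__foo=1, net__bar=2)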
_____________ TestNeuralNet.test_fit_params_passed_to_train_split ______________

self = 
net_cls = 
data = [array reprs identical to the data fixture in the first failure above]

    def test_fit_params_passed_to_train_split(self, net_cls, data):
        from skorch.toy import MLPModule

        X, y = data
        side_effect = []

        # pylint: disable=unused-argument
        def fp_train_split(dataset, y=None, **fit_params):
            side_effect.append(fit_params)
            return dataset, dataset

        class FPModule(MLPModule):
            # pylint: disable=unused-argument,arguments-differ
            def forward(self, X, **fit_params):
                return super().forward(X)

        net = net_cls(
            FPModule,
            max_epochs=1,
            batch_size=50,
            train_split=fp_train_split,
        )
        net.initialize()
        net.callbacks_ = []
>       net.fit(X[:100], y[:100], foo=1, bar=2)

skorch/tests/test_net.py:1734: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
[remaining frames identical to the traceback in test_call_fit_twice_warmstart above]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

[input and target tensors identical to the locals in test_fit_params_passed_to_module above; target dtype=torch.int32]
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

    def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'):
        [docstring and body identical to the nll_loss source shown in full above]
>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
----------------------------- Captured stdout call -----------------------------
Re-initializing module.
Re-initializing optimizer.
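The same fit_params routing also reaches a custom train_split callable, which is what the fp_train_split fixture above exercises. A minimal sketch mirroring it (MyModule and the commented-out calls stand in for the test fixtures):

from skorch import NeuralNetClassifier

def my_train_split(dataset, y=None, **fit_params):
    print(fit_params)         # receives {'foo': 1, 'bar': 2} from the fit call below
    return dataset, dataset   # same data for training and validation

# net = NeuralNetClassifier(MyModule, train_split=my_train_split)
# net.fit(X, y, foo=1, bar=2)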
_________________ TestNeuralNet.test_data_dict_and_fit_params __________________

self = <...>
net_cls = <...>
data = [same arrays as in the first failure]

    def test_data_dict_and_fit_params(self, net_cls, data):
        from skorch.toy import MLPModule
        X, y = data

        class FPModule(MLPModule):
            # pylint: disable=unused-argument,arguments-differ
            def forward(self, X0, X1, **fit_params):
                assert fit_params.get('foo') == 3
                return super().forward(X0)

        net = net_cls(FPModule, max_epochs=1, batch_size=50, train_split=None)
        # does not raise
>       net.fit({'X0': X, 'X1': X}, y, foo=3)

skorch/tests/test_net.py:1754:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above]

input = tensor([[    nan, -2.3550], [-1.8285, -1.3605], [-0.2043,     nan], [-1.7551, -2.7006], ..., -3.3296], [-1.0941, -1.6583], [-1.0723, -5.5980], [-2.8597, -0.7945]], grad_fn=<...>)
target = tensor([0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
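Why this shows up on armhf while the amd64 build of the same package passes: numpy's default integer dtype follows the platform's C long, so integer class labels come out 32-bit on 32-bit hosts. A short illustration (an inference from the build environment above, not from the test suite itself):

    import numpy as np

    y = np.array([0, 1, 1, 0])
    print(y.dtype)               # int64 on 64-bit hosts, int32 on armhf

    # Casting labels up front sidesteps the nll_loss dtype check:
    y64 = y.astype(np.int64)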
_____________________ TestNeuralNet.test_fit_with_dataset ______________________

self = <...>, net_cls = <...>, dataset_cls = <...>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = [same arrays as in the first failure]

    def test_fit_with_dataset(self, net_cls, module_cls, data, dataset_cls):
        ds = dataset_cls(*data)
        net = net_cls(module_cls, max_epochs=1)
>       net.fit(ds, data[1])

skorch/tests/test_net.py:1778:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above]

input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_____ TestNeuralNet.test_fit_with_dataset_X_y_inaccessible_does_not_raise ______

[fixtures as above]

    def test_fit_with_dataset_X_y_inaccessible_does_not_raise(
            self, net_cls, module_cls, data):
        class MyDataset(torch.utils.data.Dataset):
            """Dataset with inaccessible X and y"""
            def __init__(self, X, y):
                self.xx = X  # incorrect attribute name
                self.yy = y  # incorrect attribute name

            def __len__(self):
                return len(self.xx)

            def __getitem__(self, i):
                return self.xx[i], self.yy[i]

        ds = MyDataset(*data)
        net = net_cls(module_cls, max_epochs=1)
>       net.fit(ds, data[1])  # does not raise

skorch/tests/test_net.py:1807:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above; input/target tensors identical to test_fit_with_dataset]

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
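The test above relies on skorch needing only the standard Dataset protocol (__len__ and __getitem__ returning (x, y) pairs); the attribute names on the dataset object are irrelevant. A sketch of that contract (hypothetical names; labels cast to int64 so NLLLoss would accept them):

    import numpy as np
    import torch

    class OpaqueDataset(torch.utils.data.Dataset):
        def __init__(self, X, y):
            self._features = X          # deliberately not called .X
            self._labels = y            # deliberately not called .y

        def __len__(self):
            return len(self._features)

        def __getitem__(self, i):
            return self._features[i], self._labels[i]

    X = np.random.randn(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, size=100).astype(np.int64)
    ds = OpaqueDataset(X, y)            # net.fit(ds, y) iterates this directly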
____________ TestNeuralNet.test_fit_with_dataset_without_explicit_y ____________

[fixtures as above]

    def test_fit_with_dataset_without_explicit_y(
            self, net_cls, module_cls, dataset_cls, data):
        from skorch.dataset import CVSplit
        net = net_cls(
            module_cls,
            max_epochs=1,
            train_split=CVSplit(stratified=False),
        )
        ds = dataset_cls(*data)
>       net.fit(ds, None)  # does not raise

skorch/tests/test_net.py:1819:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above]

input = tensor([[-0.5951, -0.8018], [-0.3171, -1.3028], [-0.6860, -0.7003], [-0.7940, -0.6015], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_____________ TestNeuralNet.test_setting_callback_to_none_possible _____________

[fixtures as above]

    def test_setting_callback_to_none_possible(self, net_cls, module_cls, data):
        from skorch.callbacks import Callback
        X, y = data[0][:30], data[1][:30]  # accelerate test
        side_effects = []

        class DummyCallback(Callback):
            def __init__(self, i):
                self.i = i

            # pylint: disable=unused-argument, arguments-differ
            def on_epoch_end(self, *args, **kwargs):
                side_effects.append(self.i)

        net = net_cls(
            module_cls,
            max_epochs=2,
            callbacks=[
                ('cb0', DummyCallback(0)),
                ('cb1', DummyCallback(1)),
                ('cb2', DummyCallback(2)),
            ],
        )
>       net.fit(X, y)

skorch/tests/test_net.py:2016:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above]

input = tensor([[-0.5699, -0.8338], [-0.7002, -0.6861], [-1.5261, -0.2451], [-0.6793, -0.7072], ..., -0.6290], [-0.9410, -0.4947], [-0.2619, -1.4679], [-0.8194, -0.5811]], grad_fn=<...>)
target = tensor([0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
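For context on the feature under test: skorch lets a named callback be deactivated by setting it to None via the double-underscore set_params convention. A sketch under that assumption (PrintEpoch is illustrative, not part of skorch):

    from skorch import NeuralNetClassifier
    from skorch.callbacks import Callback
    from skorch.toy import MLPModule

    class PrintEpoch(Callback):
        def on_epoch_end(self, net, **kwargs):
            print('epoch finished')

    net = NeuralNetClassifier(
        MLPModule,
        max_epochs=2,
        callbacks=[('printer', PrintEpoch())],
    )
    net.set_params(callbacks__printer=None)   # 'printer' no longer fires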
_________________ TestNeuralNet.test_no_grad_during_validation _________________

[fixtures as above]

    def test_no_grad_during_validation(self, net_cls, module_cls, data):
        """Test that gradient is only calculated during training step,
        not validation step."""
        # pylint: disable=unused-argument
        def check_grad(*args, loss, training, **kwargs):
            if training:
                assert loss.requires_grad
            else:
                assert not loss.requires_grad

        mock_cb = Mock(on_batch_end=check_grad)
        net = net_cls(module_cls, max_epochs=1, callbacks=[mock_cb])
>       net.fit(*data)

skorch/tests/test_net.py:2142:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above; input/target tensors identical to test_fit_with_dataset]

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_________________ TestNeuralNet.test_callback_on_grad_computed _________________

[fixtures as above]

    def test_callback_on_grad_computed(self, net_cls, module_cls, data):
        module = module_cls()
        expected_names = set(name for name, _ in module.named_parameters())

        def on_grad_computed(*args, named_parameters, **kwargs):
            names = set(name for name, _ in named_parameters)
            assert expected_names == names

        mock_cb = Mock(on_grad_computed=on_grad_computed)
        net = net_cls(module, max_epochs=1, callbacks=[mock_cb])
>       net.fit(*data)

skorch/tests/test_net.py:2155:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above; input/target tensors identical to test_fit_with_dataset]

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
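The on_grad_computed hook fires after the backward pass and receives the module's named_parameters, which makes it the natural place for gradient monitoring or clipping. A sketch of such a callback (GradNormLogger is illustrative, not part of skorch):

    from skorch.callbacks import Callback

    class GradNormLogger(Callback):
        def on_grad_computed(self, net, named_parameters, **kwargs):
            # named_parameters yields (name, parameter) pairs
            total = sum(p.grad.norm().item()
                        for _, p in named_parameters
                        if p.grad is not None)
            print('total grad norm: {:.4f}'.format(total))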
_ TestNeuralNet.test_batch_size_neg_1_uses_whole_dataset[net_kwargs0-800-200] __

[fixtures as above]
net_kwargs = {'batch_size': -1}, expected_train_batch_size = 800
expected_valid_batch_size = 200

    @pytest.mark.parametrize(
        'net_kwargs,expected_train_batch_size,expected_valid_batch_size', [
            ({'batch_size': -1}, 800, 200),
            ({'iterator_train__batch_size': -1}, 800, 128),
            ({'iterator_valid__batch_size': -1}, 128, 200),
        ]
    )
    def test_batch_size_neg_1_uses_whole_dataset(
            self, net_cls, module_cls, data, net_kwargs,
            expected_train_batch_size, expected_valid_batch_size):
        train_loader_mock = Mock(side_effect=torch.utils.data.DataLoader)
        valid_loader_mock = Mock(side_effect=torch.utils.data.DataLoader)

        net = net_cls(module_cls, max_epochs=1,
                      iterator_train=train_loader_mock,
                      iterator_valid=valid_loader_mock,
                      **net_kwargs)
>       net.fit(*data)

skorch/tests/test_net.py:2189:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above]

input = tensor([[-0.7315, -0.6562], [-0.6016, -0.7939], [-0.5100, -0.9175], ..., [-0.5821, -0.8181], [-0.8258, -0.5761], [-0.6741, -0.7126]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0], dtype=torch.int32)

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
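The semantics this parametrized test checks: batch_size=-1 means "use the whole (sub)set as one batch", and the iterator_train__ / iterator_valid__ prefixes override it for a single loader, hence the expected 800/200 sizes from the default 80/20 split of the 1000-sample fixture and the default batch size of 128. A sketch of the three configurations (illustrative, mirroring the kwargs above):

    from skorch import NeuralNetClassifier
    from skorch.toy import MLPModule

    # one 800-sample train batch, one 200-sample valid batch:
    net = NeuralNetClassifier(MLPModule, max_epochs=1, batch_size=-1)

    # whole train split per batch; validation keeps the default of 128:
    net = NeuralNetClassifier(MLPModule, max_epochs=1,
                              iterator_train__batch_size=-1)

    # whole valid split per batch; training keeps the default of 128:
    net = NeuralNetClassifier(MLPModule, max_epochs=1,
                              iterator_valid__batch_size=-1)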
_ TestNeuralNet.test_batch_size_neg_1_uses_whole_dataset[net_kwargs1-800-128] __

[fixtures and test source identical to the previous parametrization]
net_kwargs = {'iterator_train__batch_size': -1}, expected_train_batch_size = 800
expected_valid_batch_size = 128

>       net.fit(*data)

skorch/tests/test_net.py:2189:
[traceback and F.nll_loss listing identical to test_fit_params_passed_to_train_split above; input/target tensors identical to the previous parametrization]

>       ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError _ TestNeuralNet.test_batch_size_neg_1_uses_whole_dataset[net_kwargs2-128-200] __ self = net_cls = module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) net_kwargs = {'iterator_valid__batch_size': -1}, expected_train_batch_size = 128 expected_valid_batch_size = 200 @pytest.mark.parametrize( 'net_kwargs,expected_train_batch_size,expected_valid_batch_size', [ ({'batch_size': -1}, 800, 200), ({'iterator_train__batch_size': -1}, 800, 128), ({'iterator_valid__batch_size': -1}, 128, 200), ] ) def test_batch_size_neg_1_uses_whole_dataset( self, net_cls, module_cls, data, net_kwargs, expected_train_batch_size, expected_valid_batch_size): train_loader_mock = Mock(side_effect=torch.utils.data.DataLoader) valid_loader_mock = Mock(side_effect=torch.utils.data.DataLoader) net = net_cls(module_cls, max_epochs=1, iterator_train=train_loader_mock, iterator_valid=valid_loader_mock, **net_kwargs) > net.fit(*data) skorch/tests/test_net.py:2189: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'mean' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, 
_ TestNeuralNet.test_batch_size_neg_1_uses_whole_dataset[net_kwargs2-128-200] __

self = <...>, net_cls = <...>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20,
    hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([[-0.9658346 , -2.1890705 ,  0.16985609, ..., -0.89645284,  0.3759244 , -1.0849651 ],
    [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0,
     0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))
net_kwargs = {'iterator_valid__batch_size': -1}, expected_train_batch_size = 128
expected_valid_batch_size = 200

    @pytest.mark.parametrize(
        'net_kwargs,expected_train_batch_size,expected_valid_batch_size', [
            ({'batch_size': -1}, 800, 200),
            ({'iterator_train__batch_size': -1}, 800, 128),
            ({'iterator_valid__batch_size': -1}, 128, 200),
        ]
    )
    def test_batch_size_neg_1_uses_whole_dataset(
            self, net_cls, module_cls, data, net_kwargs,
            expected_train_batch_size, expected_valid_batch_size):
        train_loader_mock = Mock(side_effect=torch.utils.data.DataLoader)
        valid_loader_mock = Mock(side_effect=torch.utils.data.DataLoader)

        net = net_cls(module_cls, max_epochs=1,
                      iterator_train=train_loader_mock,
                      iterator_valid=valid_loader_mock,
                      **net_kwargs)
>       net.fit(*data)

skorch/tests/test_net.py:2189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
skorch/net.py:903: in fit
    self.partial_fit(X, y, **fit_params)
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/net.py:709: in train_step
    self.optimizer_.step(step_fn)
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step
    loss = closure()
skorch/net.py:705: in step_fn
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.5952, -0.8017],
        [-0.6448, -0.7440],
        [-2.2988, -0.1058],
        [-0.9461, -0.4914],
        ...,
        [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
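[editor's note] For context on what this test exercises: in skorch, batch_size=-1 means "one batch containing the whole split", and the double-underscore prefixes route the setting to only the train or only the valid iterator. A rough, self-contained sketch (MyModule and the data are invented for illustration; targets are int64 to sidestep the Long/Int error above):

    import numpy as np
    import torch
    from torch import nn
    from skorch import NeuralNetClassifier

    class MyModule(nn.Module):
        def __init__(self):
            super().__init__()
            self.dense = nn.Linear(20, 2)
            self.out = nn.LogSoftmax(dim=-1)

        def forward(self, X):
            return self.out(self.dense(X))

    X = np.random.randn(1000, 20).astype(np.float32)
    y = np.random.randint(0, 2, 1000).astype(np.int64)

    # the validation iterator gets one batch with the whole valid split;
    # the train iterator keeps skorch's default batch size of 128
    net = NeuralNetClassifier(MyModule, max_epochs=1, iterator_valid__batch_size=-1)
    net.fit(X, y)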
______________________ TestNeuralNet.test_batch_count[40] ______________________

[self, net_cls, module_cls and data fixtures identical to the previous failure]
batch_size = 40

    @pytest.mark.parametrize('batch_size', [40, 100])
    def test_batch_count(self, net_cls, module_cls, data, batch_size):
        net = net_cls(module_cls, max_epochs=1, batch_size=batch_size)
        X, y = data
>       net.fit(X, y)

skorch/tests/test_net.py:2207:
[traceback identical to the previous failure]

input = tensor([[-0.6509, -0.7373], [-0.6833, -0.7031], ..., [-0.8174, -0.5826]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, ..., 1, 1, 1, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
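[editor's note] test_batch_count checks how many batches an epoch produces. Assuming the fixture's 1000-sample dataset and skorch's default 80/20 train/valid split, the expected count is plain ceiling division:

    import math

    n_train, n_valid = 800, 200
    for batch_size in (40, 100):
        train_batches = math.ceil(n_train / batch_size)
        valid_batches = math.ceil(n_valid / batch_size)
        print(batch_size, train_batches, valid_batches)
    # batch_size=40  -> 20 train batches, 5 valid batches
    # batch_size=100 ->  8 train batches, 2 valid batches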
_____________________ TestNeuralNet.test_batch_count[100] ______________________

[self, net_cls, module_cls and data fixtures identical to the previous failure]
batch_size = 100

[test source identical to test_batch_count[40] above]
>       net.fit(X, y)

skorch/tests/test_net.py:2207:
[traceback identical to the previous failure]

input = tensor([[-0.6837, -0.7027], [-0.6546, -0.7333], ..., [-0.9325, -0.5002]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, ..., 0, 0, 1, 1, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
____________________ TestNeuralNet.test_fit_lbfgs_optimizer ____________________

[self, net_cls, module_cls and data fixtures identical to the previous failure]

    @flaky(max_runs=5)
    def test_fit_lbfgs_optimizer(self, net_cls, module_cls, data):
        # need to randomize the seed, otherwise flaky always runs with
        # the exact same seed
        torch.manual_seed(int(time.time()))
        X, y = data
        net = net_cls(
            module_cls,
            optimizer=torch.optim.LBFGS,
            lr=1.0,
            batch_size=-1,
        )
>       net.fit(X, y)

skorch/tests/test_net.py:2227:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[frames identical to the previous failures down to train_step, except the optimizer step goes through]
/usr/lib/python3/dist-packages/torch/optim/lbfgs.py:311: in step
    orig_loss = closure()
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
[remaining frames identical to the previous failures]

input = tensor([[-0.4326, -1.0465], [-0.6473, -0.7412], ..., [-0.4492, -1.0164]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, ..., 0, 1, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
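[editor's note] Unlike SGD, torch.optim.LBFGS re-evaluates the loss internally and therefore requires a closure, which is why this traceback runs through lbfgs.py's `orig_loss = closure()`. A minimal torch-only sketch of the pattern that skorch's train_step wraps:

    import torch

    model = torch.nn.Linear(20, 2)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0)

    X = torch.randn(64, 20)
    y = torch.randint(0, 2, (64,))  # int64 by default, so the criterion accepts it

    def closure():
        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        return loss

    optimizer.step(closure)  # LBFGS may call the closure several times per step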
____________ TestNeuralNet.test_accumulator_that_returns_last_value ____________

[self, net_cls, module_cls and data fixtures identical to the previous failure]

    def test_accumulator_that_returns_last_value(
            self, net_cls, module_cls, data):
        # We define an optimizer that calls the step function 3 times
        # and an accumulator that returns the last of those calls. We
        # then test that the correct values were stored.
        from skorch.utils import FirstStepAccumulator

        side_effect = []

        class SGD3Calls(torch.optim.SGD):
            def step(self, closure=None):
                for _ in range(3):
                    loss = super().step(closure)
                    side_effect.append(float(loss))

        class MyAccumulator(FirstStepAccumulator):
            """Accumulate all steps and return the last."""
            def store_step(self, step):
                if self.step is None:
                    self.step = [step]
                else:
                    self.step.append(step)

            def get_step(self):
                # Losses should only ever be retrieved after storing 3
                # times.
                assert len(self.step) == 3
                return self.step[-1]

        X, y = data
        max_epochs = 2
        batch_size = 100
        net = net_cls(
            module_cls,
            optimizer=SGD3Calls,
            max_epochs=max_epochs,
            batch_size=batch_size,
            train_split=None,
        )
        net.get_train_step_accumulator = MyAccumulator
>       net.fit(X, y)

skorch/tests/test_net.py:2274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[frames identical to the previous failures, with the additional frame]
skorch/tests/test_net.py:2246: in step
    loss = super().step(closure)
[remaining frames identical to the previous failures]

input = tensor([[-0.5243, -0.8964], [-0.6978, -0.6885], ..., [-1.5143, -0.2484]], grad_fn=<...>)
target = tensor([0, 0, 1, 0, ..., 0, 0, 1, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
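[editor's note] The accumulator test relies on a contract of torch optimizers: step(closure) returns whatever the closure returned, which is the value SGD3Calls records and the accumulator stores. A small sketch of that contract, separate from skorch:

    import torch

    model = torch.nn.Linear(2, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    def closure():
        opt.zero_grad()
        loss = model(torch.ones(4, 2)).pow(2).mean()
        loss.backward()
        return loss

    # step() hands back the closure's return value; this is what the
    # accumulator in the test above collects three times per step
    loss = opt.step(closure)
    print(float(loss))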
_____________________ TestNeuralNet.test_predefined_split ______________________

[self, net_cls, module_cls and data fixtures identical to the previous failure]
predefined_split = <...>, dataset_cls = <...>

    def test_predefined_split(
            self, net_cls, module_cls, data, predefined_split, dataset_cls):
        train_loader_mock = Mock(side_effect=torch.utils.data.DataLoader)
        valid_loader_mock = Mock(side_effect=torch.utils.data.DataLoader)

        train_ds = dataset_cls(*data)
        valid_ds = dataset_cls(*data)

        net = net_cls(
            module_cls,
            max_epochs=1,
            iterator_train=train_loader_mock,
            iterator_valid=valid_loader_mock,
            train_split=predefined_split(valid_ds)
        )
>       net.fit(train_ds, None)

skorch/tests/test_net.py:2307:
[traceback identical to the previous failures]

input = tensor([[-0.4030, -1.1035], [-0.6978, -0.6885], ..., [-0.8443, -0.5618]], grad_fn=<...>)
target = tensor([0, 0, 1, 0, ..., 1, 0, 0, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
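[editor's note] predefined_split is skorch's helper for pinning an exact validation set instead of letting the net split internally. A rough standalone sketch (module and data invented; this assumes skorch's documented skorch.helper.predefined_split and skorch.dataset.Dataset):

    import numpy as np
    from torch import nn
    from skorch import NeuralNetClassifier
    from skorch.dataset import Dataset
    from skorch.helper import predefined_split

    X = np.random.randn(1000, 20).astype(np.float32)
    y = np.random.randint(0, 2, 1000).astype(np.int64)
    valid_ds = Dataset(X[800:], y[800:])

    module = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))
    net = NeuralNetClassifier(
        module,
        max_epochs=1,
        train_split=predefined_split(valid_ds),  # always validate on valid_ds
    )
    net.fit(X[:800], y[:800])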
__________________ TestNeuralNet.test_predefined_split_with_y __________________

[self, net_cls, module_cls and data fixtures identical to the previous failure]
predefined_split = <...>, dataset_cls = <...>

    def test_predefined_split_with_y(
            self, net_cls, module_cls, data, predefined_split, dataset_cls):
        # A change in the signature of utils._make_split in #646 led
        # to a bug reported in #681, namely `TypeError: _make_split()
        # got multiple values for argument 'valid_ds'`. This is a test
        # for the bug.
        X, y = data
        X_train, y_train, X_valid, y_valid = X[:800], y[:800], X[800:], y[800:]
        valid_ds = dataset_cls(X_valid, y_valid)
        net = net_cls(
            module_cls,
            max_epochs=1,
            train_split=predefined_split(valid_ds),
        )
>       net.fit(X_train, y_train)

skorch/tests/test_net.py:2329:
[traceback identical to the previous failures]

input = tensor([[-0.4030, -1.1035], [-0.6978, -0.6885], ..., [-0.8443, -0.5618]], grad_fn=<...>)
target = tensor([0, 0, 1, 0, ..., 1, 0, 0, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_____________ TestNeuralNet.test_set_lr_at_runtime_sets_lr_pgroups _____________

[self, net_cls, module_cls and data fixtures identical to the previous failure]

    def test_set_lr_at_runtime_sets_lr_pgroups(self, net_cls, module_cls, data):
        lr_pgroup_0 = 0.1
        lr_pgroup_1 = 0.2
        lr_pgroup_0_new = 0.3
        lr_pgroup_1_new = 0.4

        net = net_cls(
            module_cls,
            lr=lr_pgroup_1,
            max_epochs=1,
            optimizer__param_groups=[
                ('sequential.0.*', {'lr': lr_pgroup_0}),
            ])
>       net.fit(*data)

skorch/tests/test_net.py:2364:
[traceback identical to the previous failures]

input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], ..., [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, ..., 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
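[editor's note] The test above configures per-parameter learning rates through skorch's optimizer__param_groups, which matches parameter names against glob patterns; parameters not matched by any pattern fall back to the net-level lr. A hedged sketch of the construction being tested (module and pattern invented; for a bare nn.Sequential the parameter names are '0.weight', '0.bias', and so on):

    from torch import nn
    from skorch import NeuralNetClassifier

    module = nn.Sequential(nn.Linear(20, 10), nn.ReLU(),
                           nn.Linear(10, 2), nn.LogSoftmax(dim=-1))

    net = NeuralNetClassifier(
        module,
        lr=0.2,  # default learning rate for unmatched parameters
        optimizer__param_groups=[
            ('0.*', {'lr': 0.1}),  # first Linear layer gets its own lr
        ],
    )
    net.initialize()
    # the matched group should use 0.1, the remaining parameters 0.2
    print([g['lr'] for g in net.optimizer_.param_groups])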
_____________ TestNeuralNet.test_criterion_training_set_correctly ______________

[self, net_cls, module_cls and data fixtures identical to the previous failure]

    def test_criterion_training_set_correctly(self, net_cls, module_cls, data):
        # check that criterion's training attribute is set correctly
        X, y = data[0][:50], data[1][:50]  # don't need all the data
        side_effect = []

        class MyCriterion(nn.NLLLoss):
            """Criterion that records its training attribute"""
            def forward(self, *args, **kwargs):
                side_effect.append(self.training)
                return super().forward(*args, **kwargs)

        net = net_cls(module_cls, criterion=MyCriterion, max_epochs=1)
>       net.fit(X, y)

skorch/tests/test_net.py:2391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[frames identical to the previous failures, with the additional frame]
skorch/tests/test_net.py:2388: in forward
    return super().forward(*args, **kwargs)
[remaining frames identical to the previous failures]

input = tensor([[-0.5612, -0.8452], [-0.7418, -0.6467], ..., [-0.7052, -0.6812]], grad_fn=<...>)
target = tensor([0, 0, 0, 0, ..., 0, 0, 0, 0, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
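[editor's note] What this test verifies is that skorch toggles the criterion's train/eval mode alongside the module; the mechanism is ordinary nn.Module state:

    from torch import nn

    criterion = nn.NLLLoss()
    criterion.train()
    print(criterion.training)  # True during training steps
    criterion.eval()
    print(criterion.training)  # False during validation and prediction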
______________ TestNeuralNet.test_criterion_is_not_a_torch_module ______________

[self, net_cls, module_cls and data fixtures identical to the previous failure]

    def test_criterion_is_not_a_torch_module(self, net_cls, module_cls, data):
        X, y = data[0][:50], data[1][:50]  # don't need all the data

        def my_criterion():
            return torch.nn.functional.nll_loss

        net = net_cls(module_cls, criterion=my_criterion, max_epochs=1)
>       net.fit(X, y)  # does not raise

skorch/tests/test_net.py:2408:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[frames identical to the previous failures down to]
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[0.5705, 0.4295], [0.4762, 0.5238], ..., [0.4940, 0.5060]], grad_fn=<...>)
target = tensor([0, 0, 0, 0, ..., 0, 0, 0, 0, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[nll_loss source and docstring elided; identical to the copy shown above]

        if dim == 2:
>           ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
E           RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
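[editor's note] The preceding test feeds skorch a criterion that is a plain callable factory rather than an nn.Module subclass; skorch only needs criterion() to return something it can call on (y_pred, y_true). A minimal sketch (module and data invented; int64 targets so the nll_loss call itself succeeds):

    import numpy as np
    import torch
    from torch import nn
    from skorch import NeuralNetClassifier

    def my_criterion():
        # not an nn.Module: just return the functional loss
        return torch.nn.functional.nll_loss

    module = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))
    X = np.random.randn(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, 100).astype(np.int64)

    net = NeuralNetClassifier(module, criterion=my_criterion, max_epochs=1)
    net.fit(X, y)  # works even though the criterion is not a torch module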
_________________ TestNeuralNet.test_gradient_accumulation[1] __________________

self = <...>
net_cls = <...>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([[-0.9658346 , -2.1890705 ,  0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))
acc_steps = 1

    @pytest.mark.parametrize('acc_steps', [1, 2, 3, 5, 10])
    def test_gradient_accumulation(self, net_cls, module_cls, data, acc_steps):
        # Test if gradient accumulation technique is possible,
        # i.e. performing a weight update only every couple of
        # batches.
        mock_optimizer = Mock()

        class GradAccNet(net_cls):
            """Net that accumulates gradients"""
            def __init__(self, *args, acc_steps=acc_steps, **kwargs):
                super().__init__(*args, **kwargs)
                self.acc_steps = acc_steps

            def initialize(self):
                # This is not necessary for gradient accumulation but
                # only for testing purposes
                super().initialize()
                self.true_optimizer_ = self.optimizer_
                mock_optimizer.step.side_effect = self.true_optimizer_.step
                mock_optimizer.zero_grad.side_effect = self.true_optimizer_.zero_grad
                self.optimizer_ = mock_optimizer

            def get_loss(self, *args, **kwargs):
                loss = super().get_loss(*args, **kwargs)
                # because only every nth step is optimized
                return loss / self.acc_steps

            def train_step(self, Xi, yi, **fit_params):
                """Perform gradient accumulation

                Only optimize every 2nd batch.
                """
                # note that n_train_batches starts at 1 for each epoch
                n_train_batches = len(self.history[-1, 'batches'])
                step = self.train_step_single(Xi, yi, **fit_params)
                if n_train_batches % self.acc_steps == 0:
                    self.optimizer_.step()
                    self.optimizer_.zero_grad()
                return step

        max_epochs = 5
        net = GradAccNet(module_cls, max_epochs=max_epochs)
        X, y = data
>       net.fit(X, y)

skorch/tests/test_net.py:2455:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
skorch/net.py:903: in fit
    self.partial_fit(X, y, **fit_params)
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/tests/test_net.py:2445: in train_step
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/tests/test_net.py:2433: in get_loss
    loss = super().get_loss(*args, **kwargs)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[torch.nn.functional.nll_loss listing identical to the failure above -- elided]
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
_________________ TestNeuralNet.test_gradient_accumulation[2] __________________

[identical failure with acc_steps = 2; duplicate fixtures, test source, traceback, and nll_loss listing elided]
E       RuntimeError: expected scalar type Long but found Int
_________________ TestNeuralNet.test_gradient_accumulation[3] __________________

[identical failure with acc_steps = 3; duplicate fixtures, test source, traceback, and nll_loss listing elided]
E       RuntimeError: expected scalar type Long but found Int
_________________ TestNeuralNet.test_gradient_accumulation[5] __________________

[identical failure with acc_steps = 5; duplicate fixtures, test source, traceback, and nll_loss listing elided]
E       RuntimeError: expected scalar type Long but found Int
_________________ TestNeuralNet.test_gradient_accumulation[10] _________________

[identical failure with acc_steps = 10; duplicate fixtures, test source, traceback, and nll_loss listing elided]
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
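The parametrized test above exercises gradient accumulation by overriding skorch's train_step so the optimizer only steps every acc_steps batches. For reference, the same technique in plain PyTorch looks roughly like the sketch below; model, loader, and acc_steps are illustrative, not part of skorch:

    import torch

    model = torch.nn.Linear(20, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    acc_steps = 2

    # toy data loader: six batches of 16 samples
    loader = [(torch.randn(16, 20), torch.randint(0, 2, (16,))) for _ in range(6)]

    for i, (Xi, yi) in enumerate(loader, start=1):
        # Scale the loss so the accumulated gradient matches that of one
        # large batch, mirroring get_loss() in the test above.
        loss = loss_fn(model(Xi), yi) / acc_steps
        loss.backward()                 # gradients accumulate across batches
        if i % acc_steps == 0:          # update weights only every acc_steps batches
            optimizer.step()
            optimizer.zero_grad()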
_________ TestNeuralNet.test_predict_nonlinearity_called_with_predict __________

self = <...>
net_cls = <...>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([[-0.9658346 , -2.1890705 ,  0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_predict_nonlinearity_called_with_predict(
            self, net_cls, module_cls, data):
        side_effect = []
        def nonlin(X):
            side_effect.append(X)
            return np.zeros_like(X)

        X, y = data[0][:200], data[1][:200]
        net = net_cls(
            module_cls, max_epochs=1, predict_nonlinearity=nonlin).initialize()
        # don't want callbacks to trigger side effects
        net.callbacks_ = []
>       net.partial_fit(X, y)

skorch/tests/test_net.py:2676:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/net.py:709: in train_step
    self.optimizer_.step(step_fn)
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step
    loss = closure()
skorch/net.py:705: in step_fn
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.6047, -0.7901], [-0.5127, -0.9134], [-0.8694, -0.5433], [-0.7471, -0.6419], ..., -0.3040], [-1.2248, -0.3479], [-0.6154, -0.7774], [-0.8882, -0.5300]], grad_fn=<...>)
target = tensor([1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[torch.nn.functional.nll_loss listing identical to the failures above -- elided]
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
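This test and the predict_proba variant that follows cover skorch's predict_nonlinearity argument: a callable applied to the module's output inside predict() and predict_proba(). A hedged usage sketch, assuming skorch's documented predict_nonlinearity parameter; MyModule and nonlin are illustrative, not from the test suite:

    import numpy as np
    import torch
    from skorch import NeuralNetClassifier

    class MyModule(torch.nn.Module):          # illustrative two-class model
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(20, 2)

        def forward(self, X):
            return torch.log_softmax(self.lin(X), dim=-1)

    def nonlin(y_pred):
        # called on the raw module output before predict_proba() returns
        return torch.exp(y_pred)              # log-probabilities -> probabilities

    net = NeuralNetClassifier(MyModule, max_epochs=1, predict_nonlinearity=nonlin)
    X = np.random.randn(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, 100).astype(np.int64)  # int64: see the fix above
    net.fit(X, y)
    proba = net.predict_proba(X)              # nonlin has been applied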
______ TestNeuralNet.test_predict_nonlinearity_called_with_predict_proba _______

self = <...>
net_cls = <...>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
data = (array([[-0.9658346 , -2.1890705 ,  0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_predict_nonlinearity_called_with_predict_proba(
            self, net_cls, module_cls, data):
        side_effect = []
        def nonlin(X):
            side_effect.append(X)
            return np.zeros_like(X)

        X, y = data[0][:200], data[1][:200]
        net = net_cls(
            module_cls, max_epochs=1, predict_nonlinearity=nonlin).initialize()
        net.callbacks_ = []  # don't want callbacks to trigger side effects
>       net.partial_fit(X, y)

skorch/tests/test_net.py:2702:

[traceback, fixture values, and nll_loss listing identical to test_predict_nonlinearity_called_with_predict above -- elided]
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
________________ TestNetSparseInput.test_fit_sparse_csr_learns _________________

self = <...>
model = Pipeline(steps=[('tfidf', TfidfVectorizer(dtype=<...>, ... inplace=False) (6): Linear(in_features=10, out_features=2, bias=True) (7): Softmax(dim=-1) ) ), ))])
X = array(['"""Tests for net.py\n', '\n', 'Although NeuralNetClassifier is used in tests, test only functionality\n..._end = net.history[-1]['train_loss']\n", '\n', ' assert score_start > 1.25 * score_end\n'], dtype='<...')

>       model.fit(X, y)

skorch/tests/test_net.py:2784:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/sklearn/pipeline.py:335: in fit
    self._final_estimator.fit(Xt, y, **fit_params_last_step)
skorch/classifier.py:142: in fit
    return super(NeuralNetClassifier, self).fit(X, y, **fit_params)
skorch/net.py:903: in fit
    self.partial_fit(X, y, **fit_params)
skorch/net.py:862: in partial_fit
    self.fit_loop(X, y, **fit_params)
skorch/net.py:775: in fit_loop
    self.run_single_epoch(dataset_train, training=True, prefix="train",
skorch/net.py:812: in run_single_epoch
    step = step_fn(Xi, yi, **fit_params)
skorch/net.py:709: in train_step
    self.optimizer_.step(step_fn)
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step
    loss = closure()
skorch/net.py:705: in step_fn
    step = self.train_step_single(Xi, yi, **fit_params)
skorch/net.py:646: in train_step_single
    loss = self.get_loss(y_pred, yi, X=Xi, training=True)
skorch/classifier.py:127: in get_loss
    return super().get_loss(y_pred, y_true, *args, **kwargs)
skorch/net.py:1196: in get_loss
    return self.criterion_(y_pred, y_true)
/usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl
    result = self.forward(*input, **kwargs)
/usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([[-0.6636, -0.7236], [-0.6121, -0.7814], [-0.7859, -0.6083], [-0.6741, -0.7126], ..., -0.6540], [-0.6406, -0.7486], [-0.6080, -0.7862], [-0.6680, -0.7190]], grad_fn=<...>)
target = tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ..., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[torch.nn.functional.nll_loss listing identical to the failures above -- elided]
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
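TestNetSparseInput feeds a scikit-learn Pipeline of TfidfVectorizer into a skorch net, exercising skorch's acceptance of scipy CSR matrices as input. Under that assumption, a rough sketch of the pattern; the corpus, module, and hyperparameters are illustrative, and densifying per batch is just one way to feed a Linear layer:

    import numpy as np
    import torch
    from sklearn.feature_extraction.text import TfidfVectorizer
    from skorch import NeuralNetClassifier

    class DenseClassifier(torch.nn.Module):   # illustrative module
        def __init__(self, input_units=10):
            super().__init__()
            self.lin = torch.nn.Linear(input_units, 2)

        def forward(self, X):
            if X.is_sparse:                   # densify the sparse batch
                X = X.to_dense()
            return torch.log_softmax(self.lin(X), dim=-1)

    texts = ["good movie", "bad movie", "great film", "awful film"] * 25
    y = np.array([1, 0, 1, 0] * 25, dtype=np.int64)   # int64 targets

    vect = TfidfVectorizer(dtype=np.float32)
    Xt = vect.fit_transform(texts)            # scipy.sparse CSR matrix
    net = NeuralNetClassifier(
        DenseClassifier,
        module__input_units=Xt.shape[1],      # vocabulary size
        max_epochs=2,
        lr=0.1,
    )
    net.fit(Xt, y)                            # CSR input, as in the test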
__________ TestLossScoring.test_scored_net_with_reduction_none[mean] ___________

self = <...>
scored_net_cls = <class '...ScoredNet'>
module_cls = functools.partial(<...>, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5)
reduction = 'mean'
data = (array([[-0.9658346 , -2.1890705 ,  0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))

    def test_scored_net_with_reduction_none(
            self, scored_net_cls, module_cls, reduction, data
    ):
        X, y = data
>       net = scored_net_cls(
            module_cls, lr=0.01, criterion__reduction=reduction
        ).fit(X, y)

skorch/tests/test_scoring.py:101:

[traceback identical to the TestNeuralNet failures above, from skorch/classifier.py:142 through torch/nn/modules/loss.py:213 -- elided]

input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=<...>)
target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32)
weight = None, size_average = None, ignore_index = -100, reduce = None
reduction = 'mean'

[torch.nn.functional.nll_loss listing identical to the failures above -- elided]
E       RuntimeError: expected scalar type Long but found Int

/usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
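The TestLossScoring tests rely on skorch's double-underscore parameter routing: criterion__reduction=... is forwarded to the criterion's constructor (NLLLoss by default for NeuralNetClassifier). A small sketch of that routing, with an illustrative inline module:

    import torch
    from skorch import NeuralNetClassifier

    module = torch.nn.Sequential(
        torch.nn.Linear(20, 2),
        torch.nn.LogSoftmax(dim=-1),
    )
    net = NeuralNetClassifier(module, lr=0.01, criterion__reduction='sum')
    net.initialize()                    # instantiates NLLLoss(reduction='sum')
    print(net.criterion_.reduction)     # -> 'sum'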
.format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError ___________ TestLossScoring.test_scored_net_with_reduction_none[sum] ___________ self = scored_net_cls = .ScoredNet'> module_cls = functools.partial(, output_nonlin=Softmax(dim=-1), input_units=20, hidden_units=10, num_hidden=2, dropout=0.5) reduction = 'sum' data = (array([[-0.9658346 , -2.1890705 , 0.16985609, ..., -0.89645284, 0.3759244 , -1.0849651 ], [-0.454767..., 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0])) def test_scored_net_with_reduction_none( self, scored_net_cls, module_cls, reduction, data ): X, y = data > net = scored_net_cls( module_cls, lr=0.01, criterion__reduction=reduction ).fit(X, y) skorch/tests/test_scoring.py:101: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skorch/classifier.py:142: in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) skorch/net.py:903: in fit self.partial_fit(X, y, **fit_params) skorch/net.py:862: in partial_fit self.fit_loop(X, y, **fit_params) skorch/net.py:775: in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", skorch/net.py:812: in run_single_epoch step = step_fn(Xi, yi, **fit_params) skorch/net.py:709: in train_step self.optimizer_.step(step_fn) /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:26: in decorate_context return func(*args, **kwargs) /usr/lib/python3/dist-packages/torch/optim/sgd.py:86: in step loss = closure() skorch/net.py:705: in step_fn step = self.train_step_single(Xi, yi, **fit_params) skorch/net.py:646: in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) skorch/classifier.py:127: in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) skorch/net.py:1196: in get_loss return self.criterion_(y_pred, y_true) /usr/lib/python3/dist-packages/torch/nn/modules/module.py:727: in _call_impl result = self.forward(*input, **kwargs) /usr/lib/python3/dist-packages/torch/nn/modules/loss.py:213: in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = tensor([[-0.5952, -0.8017], [-0.6448, -0.7440], [-2.2988, -0.1058], [-0.9461, -0.4914], ..., -0.6582], [-0.7866, -0.6077], [-0.5596, -0.8473], [-1.1561, -0.3779]], grad_fn=) target = tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, ..., 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.int32) weight = None, size_average = None, ignore_index = -100, reduce = None reduction = 'sum' def nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean'): # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor r"""The negative log likelihood loss. See :class:`~torch.nn.NLLLoss` for details. Args: input: :math:`(N, C)` where `C = number of classes` or :math:`(N, C, H, W)` in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` in the case of K-dimensional loss. 
target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for K-dimensional loss. weight (Tensor, optional): a manual rescaling weight given to each class. If given, has to be a Tensor of size `C` size_average (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field :attr:`size_average` is set to ``False``, the losses are instead summed for each minibatch. Ignored when reduce is ``False``. Default: ``True`` ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When :attr:`size_average` is ``True``, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per batch element instead and ignores :attr:`size_average`. Default: ``True`` reduction (string, optional): Specifies the reduction to apply to the output: ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, ``'mean'``: the sum of the output will be divided by the number of elements in the output, ``'sum'``: the output will be summed. Note: :attr:`size_average` and :attr:`reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override :attr:`reduction`. Default: ``'mean'`` Example:: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input), target) >>> output.backward() """ if not torch.jit.is_scripting(): tens_ops = (input, target) if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): return handle_torch_function( nll_loss, tens_ops, input, target, weight=weight, size_average=size_average, ignore_index=ignore_index, reduce=reduce, reduction=reduction) if size_average is not None or reduce is not None: reduction = _Reduction.legacy_get_string(size_average, reduce) dim = input.dim() if dim < 2: raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' .format(input.size(0), target.size(0))) if dim == 2: > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) E RuntimeError: expected scalar type Long but found Int /usr/lib/python3/dist-packages/torch/nn/functional.py:2264: RuntimeError
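Every failure above shares one root cause: on 32-bit architectures such as armhf, NumPy's default integer type is int32, so the integer label arrays become torch.int32 ("Int") tensors when converted, while torch's nll_loss only accepts int64 ("Long") targets. A minimal sketch of the failure and the obvious workaround, assuming only numpy and torch; the variable names are illustrative and this is not the package's own code:

import numpy as np
import torch
import torch.nn.functional as F

# dtype is forced to int32 so the repro also works on 64-bit hosts;
# on armhf it is already NumPy's default integer type.
y = np.array([1, 0, 1], dtype=np.int32)
target = torch.from_numpy(y)                 # tensor of dtype torch.int32
log_probs = F.log_softmax(torch.randn(3, 2), dim=1)

try:
    F.nll_loss(log_probs, target)            # what the test suite runs into
except RuntimeError as exc:
    print(exc)                               # expected scalar type Long but found Int

loss = F.nll_loss(log_probs, target.long())  # casting to int64 avoids the error

Casting the labels up front (for example y.astype('int64')) would sidestep these failures; the package build nevertheless completes because debian/rules deliberately ignores the test result, as the "Error 25 (ignored)" line further below shows.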
=============================== warnings summary =============================== .pybuild/cpython3_3.9/build/skorch/tests/test_helper.py::TestSliceDict::test_init_inconsistent_shapes /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/helper.py:6: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working from collections import Sequence .pybuild/cpython3_3.9/build/skorch/tests/test_helper.py: 26 warnings .pybuild/cpython3_3.9/build/skorch/tests/test_net.py: 24 warnings /usr/lib/python3/dist-packages/sklearn/model_selection/_validation.py:548: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/sklearn/model_selection/_validation.py", line 531, in _fit_and_score estimator.fit(X_train, y_train, **fit_params) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/classifier.py", line 142, in fit return super(NeuralNetClassifier, self).fit(X, y, **fit_params) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 903, in fit self.partial_fit(X, y, **fit_params) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 862, in partial_fit self.fit_loop(X, y, **fit_params) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 775, in fit_loop self.run_single_epoch(dataset_train, training=True, prefix="train", File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 812, in run_single_epoch step = step_fn(Xi, yi, **fit_params) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 709, in train_step self.optimizer_.step(step_fn) File "/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/usr/lib/python3/dist-packages/torch/optim/sgd.py", line 86, in step loss = closure() File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 705, in step_fn step = self.train_step_single(Xi, yi, **fit_params) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 646, in train_step_single loss = self.get_loss(y_pred, yi, X=Xi, training=True) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/classifier.py", line 127, in get_loss return super().get_loss(y_pred, y_true, *args, **kwargs) File "/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py", line 1196, in get_loss return self.criterion_(y_pred, y_true) File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/lib/python3/dist-packages/torch/nn/modules/loss.py", line 213, in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) File "/usr/lib/python3/dist-packages/torch/nn/functional.py", line 2264, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: expected scalar type Long but found Int warnings.warn("Estimator fit failed. The score on this train-test" .pybuild/cpython3_3.9/build/skorch/tests/test_helper.py::TestSliceDict::test_grid_search_with_dict_works .pybuild/cpython3_3.9/build/skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_works .pybuild/cpython3_3.9/build/skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_and_internal_split_works .pybuild/cpython3_3.9/build/skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_X_and_slds_y .pybuild/cpython3_3.9/build/skorch/tests/test_net.py::TestNeuralNet::test_grid_search_works /usr/lib/python3/dist-packages/sklearn/model_selection/_search.py:847: FutureWarning: The parameter 'iid' is deprecated in 0.22 and will be removed in 0.24.
warnings.warn( .pybuild/cpython3_3.9/build/skorch/tests/test_net.py::TestNeuralNet::test_net_variable_label_lengths /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_net.py:2107: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray y = np.array([[1], [1, 0, 1], [1, 1], [1, 1, 0], [1, 0]]) .pybuild/cpython3_3.9/build/skorch/tests/test_net.py::TestNeuralNet::test_net_variable_label_lengths /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_net.py:2109: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray y = np.array([np.array(n, dtype='float32')[:, np.newaxis] for n in y]) -- Docs: https://docs.pytest.org/en/stable/warnings.html
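The warnings above each point at a small, mechanical fix: the abstract base classes now live in collections.abc, ragged label arrays must opt in to dtype=object explicitly, and the GridSearchCV FutureWarning disappears once the deprecated iid argument is no longer passed. A short sketch of the first two fixes, illustrative rather than the package's actual code:

import numpy as np
from collections.abc import Sequence  # not: from collections import Sequence

# Ragged nested sequences need an explicit object array, otherwise NumPy
# emits the VisibleDeprecationWarning quoted above.
y = np.array([[1], [1, 0, 1], [1, 1], [1, 1, 0], [1, 0]], dtype=object)
assert isinstance(y[1], Sequence)     # plain lists still count as Sequences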
----------- coverage: platform linux, python 3.9.2-final-0 ----------- Name Stmts Miss Cover Missing ------------------------------------------------------------------ skorch/__init__.py 26 5 81% 22, 31-32, 39-41 skorch/callbacks/__init__.py 7 0 100% skorch/callbacks/base.py 18 0 100% skorch/callbacks/logging.py 236 115 51% 173-176, 179-186, 189-195, 199-204, 207-208, 263-265, 268-273, 277, 281-290, 365, 378, 381, 417, 430, 520-522, 525-528, 531, 534-536, 539-542, 545, 549-555, 559-560, 565-581, 584, 589-591, 601-603, 666-669, 672-679, 682, 704-712, 720-726, 729-730 skorch/callbacks/lr_scheduler.py 111 79 29% 20-22, 31-38, 77-81, 101-112, 115-128, 131-133, 140-143, 146-151, 156-177, 180-187, 193-198, 248-252, 255, 259-271 skorch/callbacks/regularization.py 11 5 55% 37-38, 41-44 skorch/callbacks/scoring.py 208 60 71% 16, 41-60, 73-74, 109, 140, 144, 146-151, 153, 172-173, 184, 186, 243-255, 258-266, 270-283, 428-441, 452, 519, 522 skorch/callbacks/training.py 232 146 37% 166-167, 178-202, 223-243, 250-252, 257-266, 270-273, 279-288, 306, 311-312, 359-366, 370-374, 377-388, 391-393, 397-406, 410-411, 493-496, 499-512, 515, 518-524, 527-529, 532, 535-540, 552-554, 560-562, 580-581, 612, 615-616, 620-623, 710-719, 722, 726-734, 737-738, 741 skorch/classifier.py 95 2 98% 300, 310 skorch/cli.py 143 117 18% 41, 46-57, 77-85, 89-99, 114-135, 150-157, 161-167, 178-195, 199-205, 209-217, 221-227, 232-244, 260-269, 316-339 skorch/dataset.py 141 7 95% 46, 169, 303-304, 311, 328, 342 skorch/exceptions.py 4 0 100% skorch/helper.py 175 6 97% 77, 116, 119, 127-131 skorch/history.py 67 1 99% 136 skorch/net.py 619 29 95% 291, 523, 527, 864, 1018, 1484, 1523, 1588, 1635, 1674, 1677, 1813, 1815-1818, 1876-1877, 1898, 1908-1912, 1974-1978, 1990-1991, 1996 skorch/regressor.py 29 0 100% skorch/scoring.py 27 21 22% 54, 60-84 skorch/setter.py 28 0 100% skorch/tests/__init__.py 0 0 100% skorch/tests/conftest.py 103 18 83% 45-46, 113-117, 123-128, 133-134, 149, 158, 168-169, 176 skorch/tests/test_classifier.py 218 21 90% 58, 61-67, 70-72, 84-87, 99-101, 108, 112, 151-153 skorch/tests/test_cli.py 164 111 32% 27-28, 39-40, 43-46, 50-51, 54-68, 72-73, 77, 81, 90-91, 94-99, 102-112, 116-117, 180-181, 185-186, 189-201, 204-217, 220-234, 238-253, 257-258, 262-263, 266-273, 276-292, 295-305, 308-319 skorch/tests/test_dataset.py 583 17 97% 357-368, 410-411, 463-464, 616-617, 645, 669, 679, 823 skorch/tests/test_helper.py 394 5 99% 179, 674-678 skorch/tests/test_history.py 139 0 100% skorch/tests/test_net.py 1591 494 69% 116-125, 137-171, 268, 304-305, 308-317, 320-329, 333-336, 341-351, 354-365, 368-380, 385-397, 402-404, 434-458, 473-475, 478-480, 483-485, 488-490, 493-497, 500-504, 507-511, 514-518, 522-537, 541-554, 562, 567-570, 575-615, 620-642, 665-684, 698-700, 706, 735-759, 827-837, 846-853, 862-866, 877-881, 885-896, 903-916, 929-931, 934, 937-940, 943, 949-964, 997-1009, 1107-1109, 1146-1157, 1255-1257, 1270, 1305-1308, 1312-1321, 1357-1359, 1363-1373, 1396-1397, 1404-1408, 1414-1424, 1465-1466, 1474-1482, 1487-1495, 1506-1514, 1518-1524, 1542, 1635-1656, 1675-1681, 1702-1708, 1735-1739, 1765, 1779-1780, 1820-1821, 1938-1944, 2005, 2019-2026, 2135-2138, 2150-2151, 2191-2200, 2209-2213, 2229-2232, 2247, 2252-2255, 2260-2261, 2277-2285, 2309-2313, 2332-2334, 2337-2341, 2344-2349, 2369-2376, 2395-2399, 2435, 2447-2450, 2457-2463, 2667-2668, 2677-2687, 2693-2694, 2703-2713, 2785-2789, 2793-2799 skorch/tests/test_regressor.py 73 0 100% skorch/tests/test_scoring.py 83 24 71% 74-76, 81-83, 86-88, 91-95, 104-109, 114-117 skorch/tests/test_setter.py 46 0 100% skorch/tests/test_toy.py 68 0 100% skorch/tests/test_utils.py 484 20 96% 21-32, 49, 86-105, 454, 457, 603-604, 634 skorch/toy.py 38 0 100% skorch/utils.py 238 19 92% 31, 91, 93, 103, 123, 132, 178-179, 214, 459, 464, 503, 510, 518, 525-530, 550, 656, 664 ------------------------------------------------------------------ TOTAL 6399 1322 79% ===Flaky Test Report=== test_fit_lbfgs_optimizer failed (4 runs remaining out of 5). expected scalar type Long but found Int test_fit_lbfgs_optimizer failed (3 runs remaining out of 5). expected scalar type Long but found Int test_fit_lbfgs_optimizer failed (2 runs remaining out of 5). expected scalar type Long but found Int test_fit_lbfgs_optimizer failed (1 runs remaining out of 5). expected scalar type Long but found Int test_fit_lbfgs_optimizer failed; it passed 0 out of the required 1 times.
expected scalar type Long but found Int ===End Flaky Test Report=== =========================== short test summary info ============================ FAILED skorch/tests/test_classifier.py::TestNeuralNet::test_takes_log_with_nllloss FAILED skorch/tests/test_classifier.py::TestNeuralNet::test_takes_no_log_without_nllloss FAILED skorch/tests/test_classifier.py::TestNeuralNet::test_high_learning_rate FAILED skorch/tests/test_classifier.py::TestNeuralNet::test_binary_classes_set_by_default FAILED skorch/tests/test_dataset.py::TestNetWithDict::test_fit_predict_proba FAILED skorch/tests/test_dataset.py::TestNetWithList::test_fit_predict_proba FAILED skorch/tests/test_dataset.py::TestNetWithPandas::test_fit_predict_proba FAILED skorch/tests/test_helper.py::TestSliceDict::test_grid_search_with_dict_works FAILED skorch/tests/test_helper.py::TestSliceDataset::test_fit_with_slds_works FAILED skorch/tests/test_helper.py::TestSliceDataset::test_fit_with_slds_without_valid_works FAILED skorch/tests/test_helper.py::TestSliceDataset::test_grid_search_with_slds_and_internal_split_works FAILED skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_defaults FAILED skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_and_transform_defaults FAILED skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_defaults_two_categoricals FAILED skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_int_as_categorical FAILED skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_transform_no_X FAILED skorch/tests/test_helper.py::TestDataFrameTransformer::test_fit_and_predict_with_pipeline FAILED skorch/tests/test_net.py::TestNeuralNet::test_train_net_after_copy[pickle] FAILED skorch/tests/test_net.py::TestNeuralNet::test_train_net_after_copy[copy.deepcopy] FAILED skorch/tests/test_net.py::TestNeuralNet::test_net_learns - RuntimeErro... FAILED skorch/tests/test_net.py::TestNeuralNet::test_save_and_load_from_checkpoint[True] FAILED skorch/tests/test_net.py::TestNeuralNet::test_save_and_load_from_checkpoint[False] FAILED skorch/tests/test_net.py::TestNeuralNet::test_checkpoint_with_prefix_and_dirname FAILED skorch/tests/test_net.py::TestNeuralNet::test_save_and_load_from_checkpoint_formatting FAILED skorch/tests/test_net.py::TestNeuralNet::test_set_params_works - Runti... FAILED skorch/tests/test_net.py::TestNeuralNet::test_changing_model_reinitializes_optimizer FAILED skorch/tests/test_net.py::TestNeuralNet::test_module_params_in_init - ... FAILED skorch/tests/test_net.py::TestNeuralNet::test_in_sklearn_pipeline - Ru... FAILED skorch/tests/test_net.py::TestNeuralNet::test_grid_search_works - Runt... FAILED skorch/tests/test_net.py::TestNeuralNet::test_net_no_valid - RuntimeEr...
FAILED skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module FAILED skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module_other_params FAILED skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_module_non_default FAILED skorch/tests/test_net.py::TestNeuralNet::test_message_fit_with_initialized_net FAILED skorch/tests/test_net.py::TestNeuralNet::test_with_initialized_sequential FAILED skorch/tests/test_net.py::TestNeuralNet::test_call_fit_twice_retrains FAILED skorch/tests/test_net.py::TestNeuralNet::test_call_fit_twice_warmstart FAILED skorch/tests/test_net.py::TestNeuralNet::test_partial_fit_first_call FAILED skorch/tests/test_net.py::TestNeuralNet::test_call_partial_fit_after_fit FAILED skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_custom_dataset_args FAILED skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_initalized_dataset FAILED skorch/tests/test_net.py::TestNeuralNet::test_net_initialized_with_partialed_dataset FAILED skorch/tests/test_net.py::TestNeuralNet::test_repr_fitted_works - Runt... FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_params_passed_to_module FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_params_passed_to_module_in_pipeline FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_params_passed_to_train_split FAILED skorch/tests/test_net.py::TestNeuralNet::test_data_dict_and_fit_params FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset - Runti... FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_X_y_inaccessible_does_not_raise FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_with_dataset_without_explicit_y FAILED skorch/tests/test_net.py::TestNeuralNet::test_setting_callback_to_none_possible FAILED skorch/tests/test_net.py::TestNeuralNet::test_no_grad_during_validation FAILED skorch/tests/test_net.py::TestNeuralNet::test_callback_on_grad_computed FAILED skorch/tests/test_net.py::TestNeuralNet::test_batch_size_neg_1_uses_whole_dataset[net_kwargs0-800-200] FAILED skorch/tests/test_net.py::TestNeuralNet::test_batch_size_neg_1_uses_whole_dataset[net_kwargs1-800-128] FAILED skorch/tests/test_net.py::TestNeuralNet::test_batch_size_neg_1_uses_whole_dataset[net_kwargs2-128-200] FAILED skorch/tests/test_net.py::TestNeuralNet::test_batch_count[40] - Runtim... FAILED skorch/tests/test_net.py::TestNeuralNet::test_batch_count[100] - Runti... FAILED skorch/tests/test_net.py::TestNeuralNet::test_fit_lbfgs_optimizer - Ru... FAILED skorch/tests/test_net.py::TestNeuralNet::test_accumulator_that_returns_last_value FAILED skorch/tests/test_net.py::TestNeuralNet::test_predefined_split - Runti... 
FAILED skorch/tests/test_net.py::TestNeuralNet::test_predefined_split_with_y FAILED skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_sets_lr_pgroups FAILED skorch/tests/test_net.py::TestNeuralNet::test_criterion_training_set_correctly FAILED skorch/tests/test_net.py::TestNeuralNet::test_criterion_is_not_a_torch_module FAILED skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[1] FAILED skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[2] FAILED skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[3] FAILED skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[5] FAILED skorch/tests/test_net.py::TestNeuralNet::test_gradient_accumulation[10] FAILED skorch/tests/test_net.py::TestNeuralNet::test_predict_nonlinearity_called_with_predict FAILED skorch/tests/test_net.py::TestNeuralNet::test_predict_nonlinearity_called_with_predict_proba FAILED skorch/tests/test_net.py::TestNetSparseInput::test_fit_sparse_csr_learns FAILED skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_with_reduction_none[mean] FAILED skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_with_reduction_none[sum] ERROR skorch/tests/test_classifier.py::TestNeuralNet::test_clone - RuntimeErr... ERROR skorch/tests/test_classifier.py::TestNeuralNet::test_predict_and_predict_proba ERROR skorch/tests/test_classifier.py::TestNeuralNet::test_score - RuntimeErr... ERROR skorch/tests/test_classifier.py::TestNeuralNet::test_with_calibrated_classifier_cv ERROR skorch/tests/test_net.py::TestNeuralNet::test_fit - RuntimeError: expec... ERROR skorch/tests/test_net.py::TestNeuralNet::test_forward - RuntimeError: e... ERROR skorch/tests/test_net.py::TestNeuralNet::test_forward_device_cpu - Runt... ERROR skorch/tests/test_net.py::TestNeuralNet::test_dropout - RuntimeError: e... ERROR skorch/tests/test_net.py::TestNeuralNet::test_pickle_save_load - Runtim... 
ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_invalid_argument_name_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_load_params_invalid_argument_name_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_f_params_and_f_module_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_load_params_with_f_params_and_f_module_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_no_state_dict_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_load_params_no_state_dict_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_unknown_attribute_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_load_params_unknown_attribute_raises ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_file ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_str ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_file_with_history_optimizer_criterion ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_load_state_dict_str_with_history_optimizer ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_history_file_obj ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_history_file_path[str] ERROR skorch/tests/test_net.py::TestNeuralNet::test_save_params_with_history_file_path[Path] ERROR skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_train_begin-1] ERROR skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_train_end-1] ERROR skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_epoch_begin-10] ERROR skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_epoch_end-10] ERROR skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_batch_begin-90] ERROR skorch/tests/test_net.py::TestNeuralNet::test_callback_is_called[on_batch_end-90] ERROR skorch/tests/test_net.py::TestNeuralNet::test_history_correct_shape - R... ERROR skorch/tests/test_net.py::TestNeuralNet::test_history_default_keys - Ru... ERROR skorch/tests/test_net.py::TestNeuralNet::test_history_is_filled - Runti... 
ERROR skorch/tests/test_net.py::TestNeuralNet::test_get_params_no_learned_params ERROR skorch/tests/test_net.py::TestNeuralNet::test_clone_results_in_uninitialized_net ERROR skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_doesnt_reinitialize ERROR skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_sets_lr ERROR skorch/tests/test_net.py::TestNeuralNet::test_set_lr_at_runtime_sets_lr_via_pgroup_0 ERROR skorch/tests/test_scoring.py::TestLossScoring::test_nonnull_sample_weight_raises[mean] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_output_type[mean] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_score_on_net_fit[mean] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_matches_criterion_value[mean] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_score_unknown_reduction_raises[mean] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_nonnull_sample_weight_raises[sum] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_output_type[sum] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_score_on_net_fit[sum] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_scored_net_matches_criterion_value[sum] ERROR skorch/tests/test_scoring.py::TestLossScoring::test_score_unknown_reduction_raises[sum] = 75 failed, 494 passed, 76 skipped, 58 warnings, 48 errors in 371.46s (0:06:11) = E: pybuild pybuild:353: test: plugin distutils failed with: exit code=1: cd /build/skorch-0.9.0/.pybuild/cpython3_3.9/build; python3.9 -m pytest -v dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 returned exit code 13 make[1]: [debian/rules:11: override_dh_auto_test] Error 25 (ignored) make[1]: Leaving directory '/build/skorch-0.9.0' create-stamp debian/debhelper-build-stamp fakeroot debian/rules binary dh binary -Spybuild --with python3 dh_testroot -O-Spybuild dh_prep -O-Spybuild dh_auto_install -O-Spybuild I: pybuild base:232: /usr/bin/python3 setup.py install --root /build/skorch-0.9.0/debian/python3-skorch running install running build running build_py running egg_info writing skorch.egg-info/PKG-INFO writing dependency_links to skorch.egg-info/dependency_links.txt writing requirements to skorch.egg-info/requires.txt writing top-level names to skorch.egg-info/top_level.txt reading manifest file 'skorch.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'skorch.egg-info/SOURCES.txt' running install_lib creating /build/skorch-0.9.0/debian/python3-skorch/usr creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9 creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/.coverage -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/regressor.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/utils.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/setter.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/net.py -> 
/build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/scoring.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__init__.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/helper.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/exceptions.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/toy.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_classifier.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_cli.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_history.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_scoring.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_toy.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/conftest.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_regressor.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_setter.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__init__.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_net.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_helper.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_dataset.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_helper.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_scoring.cpython-39-pytest-6.0.2.pyc -> 
/build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_net.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_toy.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/__init__.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_setter.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/conftest.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_cli.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_regressor.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_dataset.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_utils.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_history.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/__pycache__/test_classifier.cpython-39-pytest-6.0.2.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/tests/test_utils.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/history.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/base.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/scoring.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__init__.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying 
/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/lr_scheduler.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/regularization.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/training.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/logging.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/training.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/lr_scheduler.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/scoring.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/regularization.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/__init__.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/base.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/callbacks/__pycache__/logging.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/cli.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/dataset.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch creating /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/net.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/cli.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/dataset.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/setter.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying 
/build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/exceptions.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/regressor.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/scoring.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/__init__.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/classifier.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/history.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/helper.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/utils.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/__pycache__/toy.cpython-39.pyc -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__pycache__ copying /build/skorch-0.9.0/.pybuild/cpython3_3.9/build/skorch/classifier.py -> /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/regressor.py to regressor.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/utils.py to utils.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/setter.py to setter.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/net.py to net.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/scoring.py to scoring.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/__init__.py to __init__.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/helper.py to helper.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/exceptions.py to exceptions.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/toy.py to toy.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_classifier.py to test_classifier.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_cli.py to test_cli.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_history.py to test_history.cpython-39.pyc byte-compiling 
/build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_scoring.py to test_scoring.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_toy.py to test_toy.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/conftest.py to conftest.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_regressor.py to test_regressor.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_setter.py to test_setter.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/__init__.py to __init__.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_net.py to test_net.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_helper.py to test_helper.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_dataset.py to test_dataset.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/tests/test_utils.py to test_utils.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/history.py to history.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/base.py to base.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/scoring.py to scoring.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/__init__.py to __init__.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/lr_scheduler.py to lr_scheduler.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/regularization.py to regularization.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/training.py to training.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/callbacks/logging.py to logging.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/cli.py to cli.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/dataset.py to dataset.cpython-39.pyc byte-compiling /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch/classifier.py to classifier.cpython-39.pyc running install_egg_info Copying skorch.egg-info to /build/skorch-0.9.0/debian/python3-skorch/usr/lib/python3.9/dist-packages/skorch-0.9.0.egg-info Skipping SOURCES.txt running install_scripts debian/rules execute_after_dh_auto_install make[1]: Entering directory '/build/skorch-0.9.0' find debian -type f -name .coverage -delete make[1]: Leaving directory '/build/skorch-0.9.0' dh_installdocs -O-Spybuild dh_installchangelogs -O-Spybuild dh_python3 -O-Spybuild dh_installinit -O-Spybuild dh_perl -O-Spybuild dh_link -O-Spybuild dh_strip_nondeterminism 
-O-Spybuild dh_compress -O-Spybuild dh_fixperms -O-Spybuild dh_missing -O-Spybuild dh_installdeb -O-Spybuild dh_gencontrol -O-Spybuild dh_md5sums -O-Spybuild dh_builddeb -O-Spybuild dpkg-deb: building package 'python3-skorch' in '../python3-skorch_0.9.0-3_all.deb'. dpkg-genbuildinfo --build=binary dpkg-genchanges --build=binary >../skorch_0.9.0-3_armhf.changes dpkg-genchanges: info: binary-only upload (no source code included) dpkg-source --after-build . dpkg-buildpackage: info: binary-only upload (no source included) dpkg-genchanges: info: not including original source code in upload I: copying local configuration I: unmounting dev/ptmx filesystem I: unmounting dev/pts filesystem I: unmounting dev/shm filesystem I: unmounting proc filesystem I: unmounting sys filesystem I: cleaning the build env I: removing directory /srv/workspace/pbuilder/1393 and its subdirectories I: Current time: Thu Aug 26 00:42:21 -12 2021 I: pbuilder-time-stamp: 1629981741 Thu Aug 26 12:42:30 UTC 2021 I: 1st build successful. Starting 2nd build on remote node ff4a-armhf-rb.debian.net. Thu Aug 26 12:42:30 UTC 2021 I: Preparing to do remote build '2' on ff4a-armhf-rb.debian.net. Thu Aug 26 12:57:22 UTC 2021 I: Deleting $TMPDIR on ff4a-armhf-rb.debian.net. Thu Aug 26 12:57:24 UTC 2021 I: skorch_0.9.0-3_armhf.changes: Format: 1.8 Date: Thu, 26 Nov 2020 15:21:35 +0800 Source: skorch Binary: python3-skorch Architecture: all Version: 0.9.0-3 Distribution: unstable Urgency: medium Maintainer: Debian Deep Learning Team Changed-By: Mo Zhou Description: python3-skorch - scikit-learn compatible neural network library that wraps PyTorch Changes: skorch (0.9.0-3) unstable; urgency=medium . * Specify X-Python3-Version: 3.9 . * Make tests verbose and ignore test failure. Checksums-Sha1: c8fdf6022ac1170665ed79b4f0f7f5ddcb72ef22 96560 python3-skorch_0.9.0-3_all.deb e0cce0ff5aa985dd0c279573fdc11736fcdaaee1 14530 skorch_0.9.0-3_armhf.buildinfo Checksums-Sha256: 6d641ba74a66b9d124d42c7da6b14d7756ae69a7e52d84fdae06c90b8654a3e2 96560 python3-skorch_0.9.0-3_all.deb 62d5f3d7474dce8428e772eb1445e461b94ec406387f8f443282e7661c225dfc 14530 skorch_0.9.0-3_armhf.buildinfo Files: 9717392c53eceeffe2a1a1941b260205 96560 science optional python3-skorch_0.9.0-3_all.deb 81311b9f40da5e0ee2eb1e9401f809da 14530 science optional skorch_0.9.0-3_armhf.buildinfo Thu Aug 26 12:57:26 UTC 2021 I: diffoscope 177 will be used to compare the two builds: # Profiling output for: /usr/bin/diffoscope --html /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/skorch_0.9.0-3.diffoscope.html --text /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/skorch_0.9.0-3.diffoscope.txt --json /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/skorch_0.9.0-3.diffoscope.json --profile=- /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/b1/skorch_0.9.0-3_armhf.changes /srv/reproducible-results/rbuild-debian/tmp.la2qCii6Ou/b2/skorch_0.9.0-3_armhf.changes ## command (total time: 0.000s) 0.000s 1 call cmp (internal) ## has_same_content_as (total time: 0.000s) 0.000s 1 call abc.DotChangesFile ## main (total time: 0.750s) 0.750s 2 calls outputs 0.000s 1 call cleanup ## recognizes (total time: 0.021s) 0.021s 10 calls diffoscope.comparators.binary.FilesystemFile 0.000s 8 calls abc.DotChangesFile Thu Aug 26 12:57:28 UTC 2021 I: diffoscope 177 found no differences in the changes files, and a .buildinfo file also exists. Thu Aug 26 12:57:28 UTC 2021 I: skorch from bullseye built successfully and reproducibly on armhf. 
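The reproducibility verdict rests on the two independent builds producing bit-identical artifacts, which diffoscope verifies by unpacking and comparing the archives in depth. As a rough illustration only, with hypothetical stand-in paths for the b1/ and b2/ result directories, the core of that check amounts to comparing digests:

import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    # Stream the file through SHA-256 and return the hex digest.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

b1 = Path("b1/python3-skorch_0.9.0-3_all.deb")  # first build's artifact
b2 = Path("b2/python3-skorch_0.9.0-3_all.deb")  # second build's artifact
print("reproducible" if sha256(b1) == sha256(b2) else "differs")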
Thu Aug 26 12:57:29 UTC 2021 I: Submitting .buildinfo files to external archives: Thu Aug 26 12:57:29 UTC 2021 I: Submitting 16K b1/skorch_0.9.0-3_armhf.buildinfo.asc Thu Aug 26 12:57:30 UTC 2021 I: Submitting 16K b2/skorch_0.9.0-3_armhf.buildinfo.asc Thu Aug 26 12:57:32 UTC 2021 I: Done submitting .buildinfo files to http://buildinfo.debian.net/api/submit. Thu Aug 26 12:57:32 UTC 2021 I: Done submitting .buildinfo files. Thu Aug 26 12:57:32 UTC 2021 I: Removing signed skorch_0.9.0-3_armhf.buildinfo.asc files: removed './b1/skorch_0.9.0-3_armhf.buildinfo.asc' removed './b2/skorch_0.9.0-3_armhf.buildinfo.asc'